  • carlodaffara.conecta.it « Open source software-based business models research
    previews of common file formats. There should be a fast, 5-minute section to show that basic activities can be performed easily. I prefer the following list: web browsing, showing compatibility with sites like FaceBook, Hi5, Google Mail; changing desktop properties like backgrounds or colours; connecting to WiFi networks; printer recognition and setup; package installation. I know that Ubuntu or OpenSUSE or Fedora users will complain that those are functionalities that are nowadays taken for granted. But consider what even technical journalists sometimes may write about Linux: "It booted like a real OS, with the familiar GUI of Windows XP and its predecessors (and of the Mac OS): icons for disks and folders, a standard menu structure and built-in support for common hardware such as networks, printers and DVD burners." Booted like a real OS. And icons. So much for the change in perspective, as the Vista user perception problem demonstrated. So a pictorial presentation is a good medium to provide an initial, fear-reducing informative presentation that will not require assistance from the shop staff. On the same side, a small informative session may be prepared (we suggested an 8-page booklet) for the assistants, to provide answers comparable to those offered for Windows machines.

    Usability of modern Linux distributions is actually good enough to be comparable to that of Windows XP on most tasks. In a thesis published in 2005, the following graph was presented, using data from previous work by Relevantive: the time and difficulty of tasks was basically the same, and most of the problems that were encountered by users were related to bad naming of the applications: "The main usability problems with the Linux desktop system were clarity of the icons and the naming of the applications. Applications did not include anything concerning their function in their name. This made it really hard for users to find the right application they were looking for." This approach was substantially improved in recent desktop releases, adding a suffix to most applications (for example, "GIMP image editor" instead of "GIMP"). As an additional result, the following were the subjective questionnaire results: 87% of the Linux test participants enjoyed working with the test system (XP: 90%); 78% of the Linux test participants believed they would be able to deal with the new system quickly (XP: 80%); 80% of the Linux test participants said that they would need a maximum of one week to achieve the same competency as on their current system (XP: 85%); 92% of the Linux test participants rated the use of the computers as easy (XP: 95%). This provides evidence that, when properly presented, a Linux desktop can provide a good end-user experience.

    The other important part is related to applications: two to five screenshots for every major application will provide an initial perception that the machine is equally capable of performing the most common tasks. Equally important is the fact that such applications need to be pre-installed and ready to use. And with "ready to use" I mean with all the potential enhancements that are available but not installed by default, like the extended GIMP plugin collection that is available under Ubuntu as gimp-plugin-registry, or the various thesauri and cliparts for OpenOffice.org. A similar activity may be performed with regard to games, which should be already installed and available for the end user. Some installers for the most requested games may be added using Wine, through a pre-loader and installer like PlayOnLinux; we found that in recent Wine builds performance is quite good, and in general better than that of proprietary repackagings like Cedega.

    One suggestion that we added is to have a separate set of repositories from which to update the various packages, to allow for pre-testing of package upgrades before they reach the end users. This, for example, would allow for the creation of alternate packages outside of the Ubuntu main repositories that guarantee the functionality of the various hardware parts even if the upstream driver changes, as recently happened with the inclusion of the new Atheros driver line in the kernel, which complicated the upgrade process for netbooks with this kind of hardware chipset. The cost and complexity of this activity is actually fairly low, requiring mainly bandwidth and storage (something that, in the time of Amazon and cloud computing, has a much lower impact) and limited human intervention.

    The next variable is social acceptance, and it is much more nuanced and difficult to assess; it also changes in a significant way from country to country, so it is more difficult for me to provide simple indications. One aspect that we found quite effective is the addition, on the side of the machine, of a simple hologram similar to those offered by proprietary software vendors, to indicate a legitimate origin of the software. We found that a significant percentage of potential users actually looked at the back or the side of the machine to see if such a feature was present, fearing that the machine could possibly be loaded with pirated software. Another important aspect is related to the message that is correlated with the acquisition: one common error is to market the machine as the lowest-cost option, a fact that sends two negative messages: that the machine is somehow "for the poor", and that value (a complex, multidimensional variable) is collapsed onto price alone, making it difficult to convey that the machine is really more about value for money than money. This is similar to how Toyota invaded the US car market: by focusing both on low cost and quality, and making sure that value was perceived in every moment of the transaction, from when the potential customer entered the showroom to when the car was bought. In fact, it would be better to have a combined pricing that is slightly higher than the lowest possible price, to make sure that there is a psychological anchoring. While price-sensitive users are, along with enthusiasts, those that up to now drove the adoption of Linux
    on the desktop, it is necessary to extend this market to the more general population; this means that purely price-based approaches are not effective anymore.

    As for the last aspect, facilitating conditions, the main hurdle perceived is the lack of immediate assistance by peers, something that is nearly guaranteed with Windows thanks to the large installed base. So a feature that we suggested is the addition of an instant-chat icon on the desktop to ask for help, which also brings back a set of web pages with some of the most commonly asked questions and links to online fora. The real need for such a feature is somewhat reduced by the fact that the hardware is pre-integrated and that pre-testing is performed before package updates, but it is a powerful psychological reassurance and should receive a central point in the desktop. Equally important is the inclusion of non-electronic documentation that allows for easy browsing before the beginning of a computing session; a very good example is the Linux Starter Pack, an introductory magazine-like guide. We discovered that plain, well-built Linux desktops are generally well accepted, with limited difficulties; most users after 4 weeks are proficient and generally happy with their new user environment. (4 Comments)

    The dynamics of OSS adoption, II: diffusion processes
    Posted by cdaffara in OSS business models, OSS data, Uncategorized on February 27th, 2009

    A follow-up to "The dynamics of OSS adoption (I)". The most common process behind OSS adoption is called diffusion, and is usually modelled using a set of differential equations. It is based on the idea that the market is made of a set of interoperating agents, each one deciding independently which technology to adopt at different moments; the model is usually capable of handling multiple participants in a market, and of predicting overall evolution. A good example of a diffusion-based dynamic equilibrium is the web server market, when total server numbers are used. If we take the data from Netcraft and we model each individual server type as a competitor, we get this kind of graph, which is consistent with a traditional Bass Model explanation (data for Apache was added to that of Google Web Server, which is Apache-based; bicubic smoothing was used to get the trend lines). Diffusion models tend to generate this kind of equilibrium lines, with the market moving, in a more or less consistent way, to an equilibrium that changes only when a specific technology is substituted by moving to another, different status.

    The probability of choosing one technology over the other depends on several factors; a very good model for such adoption is the UTAUT model (some pdf examples here and here), which was found capable of predicting 70% of the variance of adoption success, meaning that the parameters in the model explain nearly perfectly whether you will adopt a technology or not. The important point to remember: this is about individual adoption, not mandated and without external constraints. In this sense, we can use it to predict how a PC owner chooses her web browser, or how a small company may choose which web server to use. The model uses four parameters: performance expectancy, effort expectancy, social influence and facilitating conditions.

    Performance expectancy: the degree to which a person believes that using a particular system would enhance his or her job performance, or the degree to which using an innovation is perceived as being better than using its precursor.

    Effort expectancy: the degree to which a person believes that using a system would be free of effort, or the degree to which a system is perceived as relatively difficult to understand and use.

    Social influence: the individual's internalization of the reference group's subjective culture, and specific interpersonal agreements that the individual has made with others in specific social situations, or the degree to which use of an innovation is perceived to enhance one's image or status in one's social system.
    Facilitating conditions: reflects perceptions of internal and external constraints on behaviour, and encompasses self-efficacy, resource facilitating conditions and technology facilitating conditions, or objective factors in the environment that observers agree make an act easy to do, including the provision of computer support.

    In the next post I will present an example of these four parameters in the context of an OSS adoption. (2 Comments)

    Random walks and Microsoft
    Posted by cdaffara in OSS business models, OSS data, blog divertissements on February 26th, 2009

    Sometimes talking about Microsoft and open source software is difficult, because it seems to have many heads looking in different directions. At the Stanford Accel Symposium, Bob Muglia (president of Microsoft's Server and Tools Business) was bold enough to say that, at some point: "At some point almost all our products will have open source in them. If MySQL or Linux do a better job for you, of course you should use those products." Of course, we all know that even Steve Ballmer mentioned: "I agree that no single company can create all the hardware and software. Openness is central because it's the foundation of choice", a statement on which Matt Asay commented, with some irony, that openness claims are mainly directed towards competitors like Apple and its iTunes/iPod offer. I would like just to point to one of the Comes vs. Microsoft exhibits (which are sometimes more interesting than your average John Grisham or Stephen King novel), where we can find such pearls of openness and freedom of choice:

    "From: Peter Wise. Sent: Monday, October 07, 2002, 9:43 AM. To: Server Platform Leadership Team. Subject: CompHot Escalation Team Summary, Month of September 2002. Microsoft Confidential. Observations and Issues: Linux infestations are being uncovered in many of our large accounts as part of the escalation engagements. People on the escalation team have gone into AXA, Ford, WalMart, the US Army and other large enterprises, where they've helped block one Linux threat, only to have it pop up in other parts of the businesses. At General Electric alone, at least five major pilots have been identified, as well as a new Center of Excellence for Linux at GE Capitol."

    "Infestation" is not exactly the word I would use to express the idea of customer choice, but you know how the software world is a battle zone. I am so relieved to see that they are now really perceiving open source as part of their ecosystem. (1 Comment)

    Transparency and dependability for external partners
    Posted by cdaffara in OSS business models, OSS data, blog on February 25th, 2009

    As a consultant, I frequently happen to answer questions about what makes open source better, not only for some adopters, but for companies and integrators that form a large network ecosystem, one that up to now had only proprietary software vendors as a source of software and technology. Many IT projects had to integrate and create workarounds for bugs in proprietary components, because no feedback on status was available. Mary Jo Foley writes on the lack of feedback to beta testers from Microsoft: "During a peak week in January we (the Windows dev team) were receiving one Send Feedback report every 15 seconds for an entire week, and to date we've received well over 500,000 of these reports. Microsoft has fixes in the pipeline for nearly 2,000 bugs in Windows code (not in third-party drivers or applications) that caused crashes or hangs. That's great: Microsoft is getting a lot of feedback about Windows 7. What kind of feedback are testers getting from the team in return? Very little. I get lots of e-mail from testers asking me whether Microsoft has fixed specific bugs that have been reported on various comment boards and Web sites. I have no idea, and neither do they" (emphasis mine). Open source, if well managed, is radically different: I had a conversation with a customer just a few minutes ago, asking for specifics on a bug encountered in Zimbra, answered simply by
    forwarding the link to the Zimbra dashboard. Not to be outdone, Alfresco has a similar openness. Or take one of my favourite examples, OpenBravo. Transparency pays because it provides a direct handle on development, and provides a feedback channel for the eventual network of partners or consultancies that are living off an open source product. This kind of transparency is becoming more and more important in our IT landscape, because time constraints and visibility of development are becoming even more important than pure monetary considerations, and it allows adopters to eventually plan for alternative solutions, depending on the individual risks and effort estimates. (1 Comment)

    On business models and their relevance
    Posted by cdaffara in OSS business models, OSS data on February 24th, 2009

    Matthew Aslett has a fantastic summary post that provides a sort of synthesis of some of the previous debates on what an OSS business model is, and how this model impacts the performance of a company, along with the usual sensible comments. There are a few points that I would like to make. It is probably true that a pure service-based company is less interesting for VCs looking for an equity investment; by "service-based" I mean "product specialists": companies that created or maintain a specific software project and use a pure FLOSS license to distribute it, with the main revenues provided by services like training and consulting (from the FLOSSMETRICS guide). Every service-based model of this kind is limited by the high percentage of non-repeatable work that must be done by humans, so the profit margins are lower than those of the average software industry or of other OSS models. On the other hand, unconstrained distribution, facilitated by the clear, unambiguous model and single license, in many cases compensates for this lower margin by increasing the effectiveness of marketing messages.

    Tarus Balog notes: "For those companies trying to make billions of dollars on software, quickly, the only way to do that in today's market is with the hybrid model, where much of the revenue comes from closed software licenses." That's right: at the moment, this seems the only possible road to a $1B company. What I am not convinced of is that this is in itself such a significant goal; after all, the importance of being big is related to the fact that bigger companies have the capability of creating more complex solutions, or of servicing customers across the globe. But in OSS, complex solutions can be created by engineering several separate components, reducing the need for a larger entity creating things from scratch, and cooperation between companies in different geographical areas may provide a reasonable offering with a much smaller overhead (the bigger the company, the less is spent on real R&D and support). A smaller (but not small) company may still be able to provide excellent quality and stability, with a more efficient process that translates into more value for dollar for the customer. I believe that in the long term the market equilibrium will be based on a set of service-based companies providing high specialization, and development consortia providing core economies of scale. After all, there is a strong economic incentive to move development outside of companies, and to reduce coding effort through reuse. Here is an example from the Nokia Maemo platform: in this slide from Erkko Anttila's thesis (more data in this previous post) it is possible to see how development effort and cost were shifted from the beginning of the project to the end. The real value comes from being able to concentrate on differentiating, user-centered applications; those can still be developed in a closed way, if the company believes that this gives them greater value.
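    The diffusion process described in the earlier post on OSS adoption dynamics is commonly illustrated with the Bass model, dN/dt = (p + qN/m)(m - N). As a minimal numerical sketch (the parameter values below are purely illustrative assumptions, not fitted to the Netcraft data):

```python
# Minimal sketch of the Bass diffusion model used to describe technology
# adoption. The parameters p, q and m below are illustrative assumptions,
# not values fitted to any real web-server market data.

def bass_adopters(p, q, m, steps, dt=1.0):
    """Integrate dN/dt = (p + q*N/m) * (m - N) with simple Euler steps."""
    n = 0.0
    series = []
    for _ in range(steps):
        n += (p + q * n / m) * (m - n) * dt
        series.append(n)
    return series

# p: coefficient of innovation (external influence),
# q: coefficient of imitation (word of mouth),
# m: market potential (e.g. total number of web servers).
curve = bass_adopters(p=0.03, q=0.38, m=100.0, steps=40)

# Cumulative adoption follows the familiar S-curve and saturates near m.
print(round(curve[0], 2), round(curve[-1], 1))
```

    The equilibrium lines mentioned in the post correspond to the flat tail of this S-curve: once the market saturates near m, the shares stay stable until a substitute technology restarts the process.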

    Original URL path: http://carlodaffara.conecta.it/page/11/ (2016-02-18)

  • The new FLOSSMETRICS project liveliness parameters « carlodaffara.conecta.it
    project. Black: the balance shows a really high number of people leaving the project.

    CM-SRA-7: Average age of people working on a project. This metric is focused on the average number of years worked by each developer. With this approximation we are able to know which members are approaching this limit, and we can estimate future effort needs. Green: the longevity is older than 3 years. Yellow: the longevity is older than 2 years and younger than 3 years. Red: the longevity is older than 1 year and younger than 2 years. Black: the longevity is younger than 1 year.

    CM-SRA-8: Evolution of people who contribute to the source code and report bugs. A way to retrieve this data is to analyze those committers and reporters with the same nickname. Taking into account the slope m of the resultant line (y = mx + b), while measuring the aggregated number over periods of one year: Green if m > 0; Yellow if m = 0; Red if m < 0; Black if there are no new submitters for several periods.

    CM-SRA-9: Same metric as above, but this is the sum of all of them, and not the evolution (a general number); we can measure the size of a community. Taking into account the slope m of the resultant line (y = mx + b), while measuring the aggregated number over periods of one year: Green if m > 0; Yellow if m = 0; Red if m < 0; Black if there are no new submitters for several periods.

    CM-IWA-1: An event is defined as any kind of activity measurable from a community: generally speaking, posts, commits or bug reports. Monthly analysis will provide a general view of the project and its tendency. Taking into account the slope m of the resultant line (y = mx + b), while measuring the aggregated number over periods of one year: Green if m > 0; Yellow if m = 0; Red if m < 0; Black if there are no new submitters for several periods.

    CM-IWA-2: Monthly analysis will provide a general view of the project; in this way, an increase or decrease in the number of commits will show the tendency of the community. Taking into account the slope m of the resultant line (y = mx + b), while measuring the aggregated number over periods of one year: Green if m > 0; Yellow if m = 0; Red if m < 0; Black if there are no new submitters for several periods.

    CM-IWA-3: Number of people working on old releases, out of total work on the project. We can determine how well supported the old releases are, for maintenance purposes. Green: more than 10%. Yellow: between 5% and 10%. Red: between 0% and 5%. Black: nobody.

    CM-IWA-4: Looking at the number of committers per file. This metric shows the territoriality in a project: generally speaking, most of the files are touched or handled by just one committer. It means that high levels of orphaning may be seen as a risk situation.
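    The slope-based "traffic light" shared by several of these metrics can be sketched as follows. This is a hedged reconstruction, not the FLOSSMETRICS implementation: the flat-slope tolerance and the number of dead periods are illustrative assumptions.

```python
# Sketch of the slope-based traffic light used by the liveliness metrics
# above: fit y = m*x + b to per-period activity counts and map the slope m
# to a colour. flat_tolerance and dead_periods are assumed values.

def slope(counts):
    """Least-squares slope of counts over equally spaced periods."""
    n = len(counts)
    mean_x = (n - 1) / 2.0
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(counts))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def classify(counts, flat_tolerance=0.1, dead_periods=6):
    """Map per-period event counts (posts, commits, bug reports) to a colour."""
    if all(c == 0 for c in counts[-dead_periods:]):
        return "black"   # no new activity for several periods
    m = slope(counts)
    if m > flat_tolerance:
        return "green"   # activity is growing
    if m < -flat_tolerance:
        return "red"     # activity is shrinking
    return "yellow"      # roughly flat

print(classify([10, 12, 15, 14, 18, 21]))  # → green
```

    The same classifier covers CM-SRA-8/9 and CM-IWA-1/2, since they differ only in which event stream (committers, reporters, posts, commits) is being counted.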

    Original URL path: http://carlodaffara.conecta.it/the-new-flossmetrics-project-liveliness-parameters/index.html (2016-02-18)

  • cdaffara « carlodaffara.conecta.it
    VirtualBox image (Wednesday, August 18th, 2010). Tags: EveryDesk, open source, OSS adoption. Posted in EveryDesk. 3 Comments.
    Oracle/Google: the strategy behind Sun/Oracle, and the OSS implications (Sunday, August 15th, 2010). Tags: open source, oracle, google, lawsuit. Posted in blog divertissements. 14 Comments.
    Oracle/Google: the patents and the implications (Friday, August 13th, 2010). Tags: FLOSS, open source, software patents. Posted in blog divertissements. 34 Comments.
    Estimating source-to-product costs for OSS: an experiment (Tuesday, August 10th, 2010). Tags: FLOSS, open source, OSS business models. Posted in OSS business models, OSS data. 3 Comments.
    About contributions, Canonical and adopters (Friday, July 30th, 2010). Tags: open source. Posted in blog divertissements. 2 Comments.
    The basis of OSS business models: property and efficiency (Monday, July 26th, 2010). Tags: open source, OSS adoption, OSS business models. Posted in OSS business models. 6 Comments.
    The relationship between Open Core, dual licensing and contributions (Wednesday, July 21st, 2010). Tags: open source, OSS adoption, OSS business models. Posted in OSS adoption, OSS business models. 5 Comments.

    Original URL path: http://carlodaffara.conecta.it/author/admin/page/4/index.html (2016-02-18)

  • OSS adoption « carlodaffara.conecta.it
    adoption, OSS business models on May 6th, 2009.

    Matt Asay just published a post titled "Community is an overhyped word in software", where he collects several observations and basically states that: "Most people don't contribute any software, any bug fixes, any blog mentions, or any anything to open source projects, including those from which they derive considerable value. They just don't. Sure, there are counterexamples to this, but they're the exception, not the rule." While true to some extent, the way the post is presented seems to imply that only commercial contributions are really of value, as he states later: "So if you want to rely on a community to build your product for you, good luck. You're going to need it, as experience suggests that hard work by a committed core team develops great software, whether it's Linux or Microsoft SharePoint, not some committee masquerading as a community." This is somewhat true and somewhat false, and this dichotomy depends on the fact that "community" is an undefined word in this context. Two years ago I gave an interview to Roberto Galoppini, and one of the questions and answers was: "What is your opinion about the community?" "Alessandro Rubini is right in expressing disbelief in a generic community: there are organized communities that can be recognized as such (Debian or Gentoo supporters are among them), but they tend to be an exception, and not the rule. Most software does not have a real community outside of the developers and eventually some users of a single company; it takes a significant effort to create an external support pyramid (core contributors, marginal contributors, lead users) that adds value. If that happens, as in Linux or the ObjectWeb consortium, the external contributions can be of significant value: we observed, even in very specialized projects, a minimum of 20% of project value from external contributors." I still believe that, by leaving the underlying idea of community undefined, Matt collates together many different collaboration patterns that should really not be placed together. In the mentioned example, the 20% was the result of an analysis of contributions to the OpenCascade project, a very specialized CAD toolkit. As I mention in my guide: "In the year 2000, fifty outside contributors to Open Cascade provided various kinds of assistance: transferring software to other systems (IRIX 64 bits, Alpha OSF), correcting defects (memory leaks) and translating the tutorial into Spanish, etc. Currently there are seventy active contributors, and the objective is to reach one hundred. These outside contributions are significant: Open Cascade estimates that they represent about 20% of the value of the software." In a similar way, Aaron Seigo listed the many different ways contributions are counted in KDE, and noticed how those contributions are mostly not code-based: artwork, documentation, human-computer interaction, marketing, quality assurance, software development, translation. Or take the contributors' area map from OpenOffice.org: while the yellow area is code-related, lots of other contributors are outside of it, and help in localization, dissemination and many other ancillary activities that are still fundamental for the success of a project.

    The Packt survey that Matt mentions is explicit about the kind of contribution that was meant: "Despite this apparent success, individual donations play an important role in its development. Its team still maintains a page on the project website requesting monetary donations, which they utilize for the promotion of phpMyAdmin. This highlights the importance of individual contributions, and how they still play a vital role in sustaining and opening up open source projects to a larger audience." This kind of monetary contribution is the exception, not the rule, and using this data point to claim that most projects are not dependent on external contributions, or depend on them only in a limited way, is an unwarranted logic jump. I must say that I am more in agreement with Tarus Balog, who in his post humorously called "sour grapes" wrote: "The fact that marketing people can't squeeze value out of community doesn't mean that communities don't have value. OpenNMS is a complex piece of software, and it takes some intense dedication to get to the point where one can contribute code. I don't expect anyone to sit down and suddenly dedicate hours and hours of their life working on it. Plus, I would never expect someone to contribute anything to OpenNMS unless they started out with some serious free loader time." This resonates with my research experience, where, under the correct conditions, communities of contributors provide a non-trivial benefit to the vendor; on the other hand, as we found in our previous FLOSSMETRICS research, the monetization barrier can be a significant hurdle for external, disengaged participation, and this may explain why companies that use an open core or dual licensing model tend to see no external community at all. Conversely, when community participation is welcomed and there is no cross-selling, external participation may provide significant added value to a project. A good example is Funambol, which has one of the best community managers I can think of; a Twitter post I recently read about them: "HUGE contribution to funambol MS Exchange connector from mailtrust. Way to go, community rocks." Are commercial OS providers really interested in dismissing this kind of contribution as irrelevant? (Tags: FLOSS, open source, OSS adoption. 5 Comments)

    Economic Free Software perspectives
    Posted by cdaffara in OSS business models, OSS data on May 4th, 2009

    "How do you make money with Free Software?" was a very common question just a few years ago. Today, that question has evolved into "What are successful business strategies that can be implemented on top of Free Software?" This is the beginning of a document that I originally prepared as an appendix for an industry group white paper; as I received many requests for a short, data-backed, concrete document to be used in university courses on the economics of FLOSS, I think
    that this may be useful as an initial discussion paper. A pdf version is available here for download. Data and text were partially adapted from the results of the EU projects FLOSSMETRICS and OpenTTT (open source business models and adoption of OSS within companies), COSPA (adoption of OSS by public administrations in Europe), CALIBRE and INES (open source in industrial environments). I am indebted to Georg Greve of FSFE, who wrote the excellent introduction (more details on the submission here) and kindly permitted redistribution. This text is licensed under CC by-SA (attribution, share-alike) 3.0. I would be grateful for an email indicating use of the text, as a way to keep track of it, at cdaffara (at) conecta.it.

    Free Software (defined 1985) is defined by the freedoms to use, study, share and improve. Synonyms for Free Software include Libre Software (ca. 1991), Open Source (1998), FOSS and FLOSS (both 200X). For the purposes of this document, this usage is synonymous with "Open Source" as defined by the Open Source Initiative (OSI).

    Economic Free Software Perspectives: Introduction

    "How do you make money with Free Software?" was a very common question just a few years ago. Today, that question has evolved into "What are successful business strategies that can be implemented on top of Free Software?" In order to develop business strategies, it is first necessary to have a clear understanding of the different aspects that you seek to address. Unfortunately, this is not made easier by the popular, ambiguous use of some terms for fundamentally different concepts and issues, e.g. "Open Source" being used for a software model, a development model, or a business model. These models are orthogonal, like the three axes of a three-dimensional coordinate system; their respective differentiators are control (software model), collaboration (development model) and revenue (business model).

    The software model axis is the one that is discussed most often. On the one hand there is proprietary software, for which the vendor retains full control over the software and the user receives limited usage permission through a license, which is granted according to certain conditions. On the other hand there is Free Software, which provides the user with unprecedented control over their software through an ex-ante grant of irrevocable and universal rights to use, study, modify and distribute the software.

    The development model axis describes the barrier to collaboration, ranging from projects that are developed by a single person or vendor to projects that allow extensive global collaboration. This is independent from the software model: there is proprietary software that allows for far-reaching collaboration, e.g. SAP with its partnership program, and there are Free Software projects that are developed by a single person or company with little or no outside input.

    The business model axis describes what kind of revenue model was chosen for the software. Options on this axis include training, services, integration, custom development, subscription models, Commercial Off-The-Shelf (COTS) software, Software as a Service (SaaS), and more.

    These three axes open the space in which any software project, and any product of any company, can freely position itself. That is not to say all these combinations will be successful: a revenue model based on lock-in strategies, with rapid paid upgrade cycles, is unlikely to work with Free Software as the underlying software model. That approach typically occurs on top of a proprietary software model, for which the business model mandates a completed financial transaction as one of the conditions to grant a license. It should be noted that the overlap of possible business models on top of the different software models is much larger than usually understood. The ex-ante grant of the Free Software model makes it generally impossible to attach conditions to the granting of a license, including the condition of a financial transaction; but it is possible to implement very similar revenue streams in the business model through contractual constructions, trademarks and/or certification.

    Each of these axes warrants individual consideration and careful planning for the goals of the project. If, for instance, the goal is to work with competitors on a non-differentiating component in order to achieve independence from a potential monopolistic supplier, it would seem appropriate to focus on collaboration and choose a software model that includes a strong copyleft licence. The business model could potentially be neglected in this case, as the expected return on investment comes in the form of strategic independence benefits and lower licence costs. In another case, a company might choose a very collaborative community development model on top of a strong copyleft licence, with a revenue model based on enterprise-ready releases that are audited for maturity, stability and security by the company for its customers. The number of possible combinations is almost endless, and the choices made will determine the individual character and the competitive strengths and weaknesses of each company. Thinking clearly about these parameters is key to a successful business strategy.

    Strategic use of Free Software vs. Free Software companies

    According to Gartner, usage of Free Software will reach 100 percent by November 2009. That makes usage of Free Software a poor criterion for what makes a Free Software company. Contribution to Free Software projects seems a slightly better choice, but as many Free Software projects have adopted a collaborative development model in which the users themselves drive development, that label would then also apply to companies that aren't Information Technology (IT) companies. IT companies are among the most intensive users of software, and will often find themselves as part of a larger stack or environment of applications. Being part of that stack, their use of software refers not only to the desktops and servers used by the company's employees, but also to the platform on top of which the company's software or solution is provided. Maintaining proprietary custom platforms for a solution is inefficient and expensive, and depending upon other proprietary companies for the platform is dangerous. In response, large proprietary enterprises have begun to phase out their proprietary platforms and are moving towards Free Software, in order to leverage the strategic advantages provided by this software model for their own use of software at the platform level. These companies will often interact well with the projects they depend upon, contribute to them, and foster their growth, as a way to develop strategic independence as users of software.

    What makes these enterprises proprietary is that, for the parts where they are not primarily users of software but suppliers to their downstream customers, the software model is proprietary, withholding from their customers the same strategic benefits of Free Software that the company is using to improve its own competitiveness. From a customer perspective, that solution itself becomes part of the platform on which the company's differentiating activities are based. This, as stated before, is an inefficient, expensive and dangerous strategy. Assuming a market perspective, it represents an inefficiency that provides a business opportunity for other companies to provide customers with a stack that is entirely Free Software; and it is strategically and economically sane for customers to prefer those providers over proprietary ones, for the very same reasons that their proprietary suppliers have chosen Free Software platforms themselves. Strategically speaking, any company that includes proprietary software model components in its revenue model should be aware that its revenue flow largely depends upon the lack of Free Software alternatives, and that growth of the market, as well as the supernormal profits generated through the proprietary model, both serve to attract other companies that will make proprietary models unsustainable. When that moment comes, the company can either move its revenue model to a different market, or it
has to transform its revenue source to work on top of a software model that is entirely Free Software So usage of and contribution to Free Software are not differentiators for what makes a Free Software company The critical differentiator is provision of Free Software downstream to customers In other words Free Software companies are companies that have adopted business models in which the revenue streams are not tied to proprietary software model licensing conditions Economic incentives of Free Software adoption The broad participation of companies and public authorities in the Free Software market is strictly related to an economic advantage in most areas the use of Free Software brings a substantial economic advantage thanks to the shared development and maintenance costs already described by researchers like Gosh that estimated an average R D cost reduction of 36 The large share of internal Free Software deployments explains why some of the economic benefits are not perceived directly in the business service market as shown by Gartner Gartner predicts that within 2010 25 of the overall software market will be Free Software based with rougly 12 of it internal to companies and administrations that adopt Free Software The remaining market still substantial is based on several different business models that monetize the software using different strategies A recent update february 2009 of the FLOSSMETRICS study on Free Software based business model is presented here after an analysis of more than 200 companies the main models identified in the market are Dual licensing the same software code distributed under the GPL and a proprietary license This model is mainly used by producers of developer oriented tools and software and works thanks to the strong coupling clause of the GPL that requires derivative works or software directly linked to be covered under the same license Companies not willing to release their own software under the GPL can obtain a proprietary 
license that provides an exemption from the distribution conditions of the GPL which seems desirable to some parties The downside of dual licensing is that external contributors must accept the same licensing regime and this has been shown to reduce the volume of external contributions which are limited mainly to bug fixes and small additions Open Core previously called split Free Software proprietary or proprietary value add this model distinguishes between a basic Free Software and a proprietary version based on the Free Software one but with the addition of proprietary plug ins Most companies following such a model adopt the Mozilla Public License as it allows explicitly this form of intermixing and allows for much greater participation from external contributions without the same requirements for copyright consolidation as in dual licensing The model has the intrinsic downside that the Free Software product must be valuable to be attractive for the users i e it should not be reduced to crippleware yet at the same time should not cannibalise the proprietary product This balance is difficult to achieve and maintain over time also if the software is of large interest developers may try to complete the missing functionality in Free Software thus reducing the attractiveness of the proprietary version and potentially giving rise to a full Free Software competitor that will not be limited in the same way Product specialists companies that created or maintain a specific software project and use a Free Software license to distribute it The main revenues are provided from services like training and consulting the ITSC class and follow the original best code here and best knowledge here of the original EUWG classification DB 00 It leverages the assumption commonly held that the most knowledgeable experts on a software are those that have developed it and this way can provide services with a limited marketing effort by leveraging the free redistribution of the code The 
downside of the model is that there is a limited barrier of entry for potential competitors as the only investment that is needed is in the acquisition of specific skills and expertise on the software itself Platform providers companies that provide selection support integration and services on a set of projects collectively forming a tested and verified platform In this sense even GNU Linux distributions were classified as platforms the interesting observation is that those distributions are licensed for a significant part under Free Software licenses to maximize external
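The three orthogonal axes described in this excerpt (control, collaboration, revenue) can be sketched as a small data model. This is a toy illustration only: the axis values, example positions and the `viable` check are invented for this sketch and are not part of the original classification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProjectPosition:
    software_model: str     # control axis: "proprietary" or "free_software"
    development_model: str  # collaboration axis: "closed" .. "open_collaborative"
    revenue_model: str      # business axis: "license_sales", "services", "subscription", ...

# The axes are independent, so any combination is expressible,
# even if not every combination is commercially viable:
single_vendor_floss = ProjectPosition("free_software", "closed", "services")
collaborative_proprietary = ProjectPosition("proprietary", "open_collaborative", "license_sales")

def viable(p: ProjectPosition) -> bool:
    """Toy check for the one combination the text rules out:
    per-copy license sales require attaching conditions (a financial
    transaction) to the license grant, which the Free Software model's
    irrevocable ex-ante grant precludes."""
    return not (p.software_model == "free_software"
                and p.revenue_model == "license_sales")

print(viable(single_vendor_floss))         # True
print(viable(collaborative_proprietary))   # True
print(viable(ProjectPosition("free_software", "open_collaborative", "license_sales")))  # False
```

The point of the sketch is merely that the three axes are independent coordinates, while business viability is a constraint on certain combinations, not on the axes themselves.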

    Original URL path: http://carlodaffara.conecta.it/tag/oss-adoption/page/3/index.html (2016-02-18)

  • divertissements « carlodaffara.conecta.it
Dissecting words for fun and profit, or: how to be a few years too late

Posted by cdaffara in OSS business models, OSS data, divertissements on April 3rd, 2009

So, after finishing a substantial part of our work on FLOSSMETRICS yesterday, I believe that I deserve some fun. And I cannot ask for more than a new flame-inducing post from a patent attorney, right here, claiming that open source will destroy the software industry, just waiting to be dissected and evaluated (he may be right, right?). Actually not; but as I have to rest somehow between my research duties with the Commission, I decided to prepare a response. After all, the writer is a fellow EE (electrical engineer), and so he will probably enjoy some response to his blog post.

Let's start by stating that the idea that OSS will destroy the software industry is not new; after all, it is one of the top 5 myths from Navica, and while no one has tried to say that in front of me, I am sure it was quite common a few years ago. Along with the idea that software helps terrorists: "Now that foreign intelligence services and terrorists know that we plan to trust Linux to run some of our most advanced defense systems, we must expect them to deploy spies to infiltrate Linux. The risk is particularly acute since many Linux contributors are based in countries from which the U.S. would never purchase commercial defense software. Some Linux providers even outsource their development to China and Russia" (from Green Hills Software CEO Dan O'Dowd).

So, let's read and think about what Gene Quinn writes:

"It is difficult, if not completely impossible, to argue the fact that open source software solutions can reduce costs when compared with proprietary software solutions, so I can completely understand why companies and governments who are cash starved would at least consider making a switch, and who can fault them for actually making the switch?"

Nice beginning, quite common in debate strategy: first concede something to the opponent, then use the opening to push something unrelated.

"The question I have is whether this is in the long term best interest of the computing software industry. What is happening is that open source solutions are forcing down pricing and the race to zero is on."

Here we take something that is acknowledged (that OSS solutions are reducing costs, thus creating pressure on pricing) and attach a second, logically unconnected claim: "the race to zero is on". Who says that a reduction in pricing leads to a reduction to zero? No one with an economics background. The reality is that competition brings down prices; theoretically, in a perfectly competitive environment made of equal products, down to the marginal cost of production. Which is, of course, not zero, as any software company will happily tell you: the cost of producing copies of software is very small, but the cost of creating, supporting, maintaining and documenting software is not zero. This also does not take into account the fact that some software companies enjoy profit margins unheard of elsewhere, which explains why there is such a rush by users to at least experiment with potentially cost-saving measures.

"As zero is approached, however, less and less money will be available to be made; proprietary software giants will long since have gone belly up, and leading open source companies such as Red Hat will not be able to compete."

Of course, since zero is not approached, the phrase is logically useless ("What is the color of my boat?" "Any you like, as I don't own one"). But let's split it in parts anyway: of course, if zero were approached, software giants would go belly up. But why would Red Hat not be able to compete? Compete with what? If all proprietary companies disappear and only OSS companies remain, then the market actually increases, even with increasingly small revenues; the same effect can be witnessed in some mobile data markets, where with the reduction in price of SMS you see an increase in the number of messages sent, resulting in an increase in revenues.

"It is quite possible that the open source movement will ultimately result in a collapse of the industry, and that would not be a good thing."

Still following the hypothetical theory that software pricing will go to zero (which, as I said, is not grounded in reality), here the author takes the previous considerations and uses a logical trick: before, he said that the proprietary companies will disappear; here, he says that there will be a collapse of "the industry", not of the proprietary industry. This way he conflates the whole software industry, which includes the proprietary and the non-proprietary actors, and conveniently avoids the non-proprietary part. Of course, this is still not grounded in anything logical. The conclusion is obvious: "that would not be a good thing". This is another rhetorical form: by grounding the claim in something emotionally or ethically based, an external negative perception is introduced in the reader, strengthening what is still a hypothesis.

And then, the avoidance trap: "I am sure that many open source advocates who are reading this are already irate, and perhaps even yelling that this Quinn guy doesn't know what he is talking about. I am used to it by now. I get it all the time. It is, after all, much easier to simply believe that someone you disagree with is clueless rather than question your own beliefs." This approach is so commonly used that it is beginning to show its age: use the fact that someone may be irate at reading the article to dismiss all critics as clueless people, unable to question "beliefs". The use of this word is another standard tactic, slipping in the idea that the personal position of an OSS adopter depends on illogical, faith-based assumptions; this, of course, would be difficult to defend in an academic environment, where we assume that researchers are not faith-based in their studies. So this is an approach commonly used in online forums, blogs and the like, which are meant for a general audience.

"It is a mistake though to dismiss what I am saying here, or any of my other writings on computer software and open source."

Of course, I am dismissing it for the content of what you write, not because of my beliefs; and I have not read anything else from you, so I am not dismissing what I have not read.

"The fact that I am a patent attorney undoubtedly makes many in the open source movement immediately think I simply don't understand technology, and my writings that state computer software is not math have only caused mathematicians and computer scientists to believe I am a quack."

This is totally unrelated to the previous arguments (who was talking of software patents, anyway? We were talking about the role of OSS in terms of competition with the proprietary software market, and about potential effects on revenues).

"Unlike most patent attorneys I do get it, and that is probably why my writings can be so offensive to the true believers. I am not only a patent attorney, but I am an electrical engineer who specializes in computer technologies, including software and business method technologies. I write software code, and whether you agree with me or not, telling me I simply don't understand is not intellectually compelling."

Of course, being part of a class of people (like EEs) is in itself not a qualification of any kind: any comment I have made up to now would be equally applicable independently of its author. Claiming to "get it", or implying that someone doesn't get it because he works as a patent attorney, is silly, and here the author falls into the same fallacy. By the way, I know some patent attorneys who perfectly "get it", along with others who believe that open source software is made by fairies in the forest. As I said, being a member of a class is in itself useless in deciding the truth of a statement.

"I do get it, and the reality is that open source software is taking us in a direction that should scare everyone."

Here the author uses the fallacy of membership discussed before, and uses it as an appeal to authority: I do get it, I am qualified, therefore I am saying the truth. And what I am saying is that OSS is dangerous, and the fact that anyone else (apart from O'Dowd, who believes that Linux will be infiltrated by terrorists) is not perceiving the problem is due to the fact that they are not looking with enough attention.

"Sun Microsystems is struggling to say the least, and the reality is that they are always going to struggle because they are an open source company, which means that the only thing they can sell is service."

Sun Microsystems has been struggling for a long time now (unfortunately: I have always loved their products). Personally, I believe that the new CEO is doing quite a turnaround on a company that has languished for a long time in a shrinking, highly lucrative market, like SGI did in the past; but that is better left to financial analysts. Anyway, their financial results were not that good even before the OSS turnaround imposed by Jonathan Schwartz, so there is no real link between the two parts of the phrase; on the contrary, the OSS part is growing nicely, while the large-scale enterprise server part is decreasing fast. It also introduces an additional error: the idea that being OSS means that you can sell only services. The author clearly has not read much on OSS business models; but he should not worry, I would be happy to send him some papers on the subject.

"Whenever you sell time, earning potential is limited. There are only so many hours in the day, and only so much you can charge by the hour. When you have a product that can be replicated, whether it be a device, a piece of proprietary software or whatever, you have the ability to leverage, which simply doesn't exist when you are selling yourself by the hour."

Of course, this is the reality of consulting. This, however, does not stop companies like IBM Global Services, Accenture and friends from living off consulting, simply by asking very high prices for a day of a specialized consultant. Or you can find groups like the 451 or RedMonk, which are more efficient and targeted towards special markets.

"So there is a realistic ceiling on the revenue that can be earned by any open source company, and that ceiling is much lower than any proprietary software company."

So, assuming that by-the-hour services are the only possible OSS business model, and that the price per hour cannot match that of a large consulting firm, then there is a revenue ceiling that is lower than that of proprietary software companies. The fact that both parts of the phrase are unsupported by arguments makes the conclusion unproven.

"It is also an undeniable truth that the way many, if not most, service companies compete is by price. When service companies try and get you to switch over they will promise to provide the same or better service for a lower price."

This should be a supporting argument for the claim that OSS companies charge a lower per-hour price than competing companies, using Sun as an example. Of course, it continues to be an unsupported argument, especially considering that the author probably never paid an invoice for a Sun consultant, or he would have discovered that their pricing is in line with the rest of the market.

"The trouble with freeware is that there is no margin on free, and while open source solutions are not free, the race to asymptotically approach free is on, hence why I say the race to zero is in full swing."

Now the author switches from OSS to "freeware", to remind us that Open Source is, after all, free. Probably RMS would say at this point "free as in free speech, not free as in free beer", but his ideas would probably be dismissed. The use of "free" here creates the appearance of a logical connection between freeware and open source; of course, the author acknowledges that OSS is not free, but as part of the same family they are both participating in the "asymptotically approach free" race to zero. As stated before, in perfect competition the race is not to zero but to the marginal cost; so using "freeware" is a way to imply that this cost is zero as well, when the reality is that it is not zero, but lower than writing everything from scratch, thanks to the reuse opportunity.

And then we move to something completely different, as Monty Python would say:

"Unfortunately, many in the patent legal community are engaging in the race to zero as well. For example, there are patent attorneys and patent agents who advertise online claiming to be able to draft and file a complete patent application for under $3,000. One of the most common ads running provides patent applications for $2,800, and I have seen some agents advertise prices as low as $1,400 for a relatively simple mechanical invention. The race to zero is in full swing with respect to patent services aimed at independent inventors and start-up companies. It is also being pushed by major companies who want large law firms to provide patent services for fees ranging from $3,500 to $7,000 per application. This is forcing many large patent law firms to simply not offer patent drafting and prosecution services any longer. There are major law firms that are seeking to outsource such work, hoping to still keep the client for litigation purposes and to negotiate business deals."

Dear writer, this is called competition. And, as before, it is not a race to zero, as you will never find an attorney doing this kind of service for free, without any strings attached; or, if they do, they will probably go out of business, leaving the market.

"Does anyone really think that paying $1,400 for an allegedly complete patent application is a wise business decision? I can't imagine that if you say that to yourself out loud it would sound like such a good idea."

Well, IF the author can prove that application quality and price are correlated, then this becomes a decision based on economics principles, depending on the hypothetical future value of the patent, measures of indirect value, and so on. If the correlation is not strict, then any rational actor would simply seek the lowest possible price.

"Likewise, Fortune 500 companies that are pushing prices down and wanting to pay only $3,500 for a patent application can't really expect to get much, if any, worthwhile protection. Do they? I suppose they do, but the reality is that they don't."

The reality is that when you are drafting a patent application you can ALWAYS make it…
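The "race to the marginal cost, not to zero" argument above can be shown with a toy numerical sketch. All numbers here are invented for illustration: competitors repeatedly undercut the going price, but no rational seller prices below marginal cost, so the price converges to the marginal cost floor rather than to zero.

```python
# Toy model of price competition. Assumed, illustrative numbers only.
MARGINAL_COST = 40.0   # per-unit cost of creating, supporting, documenting
UNDERCUT = 0.10        # each round, the going price is undercut by 10%

price = 200.0
for _ in range(50):
    # Undercut the current price, but never sell below marginal cost.
    price = max(MARGINAL_COST, price * (1 - UNDERCUT))

print(round(price, 2))  # 40.0: the floor is marginal cost, not zero
```

However aggressive the undercutting, the limit of this process is the marginal cost of production, which for software (support, maintenance, documentation) is small but strictly positive.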

    Original URL path: http://carlodaffara.conecta.it/category/divertissements/page/4/index.html (2016-02-18)

  • OSS business models « carlodaffara.conecta.it
    is where the vendors clearly show the underlying misunderstanding on how open source works you can still sell your assembly of hardware and software as with EAL it is the combination of both that is certified not the software in isolation and continue the current business model It is doubtful that the open source community as mentioned in the paper will ever certify the code as it is a costly and substantial effort exactly like no individual applied to EAL4 certification for Linux that requires a substantial amount of money The various vendors would probably do something better if they started a collaborative effort for a minimum denominator system to be used as a basis for their system in a way similar to that performed by mobile phone companies in the LiMo and Android projects or through industry consortia like Eclipse They could still be introducing differentiating aspects in the hardware and upper layer software while reducing the costs of R D and improving the transparency of a critical component of our modern democracies No Comments MXM patents and licenses clarity is all it takes Posted by cdaffara in OSS business models OSS data blog on April 10th 2009 Recently on the OSI mailing list Carlo Piana wrote a proposed license for the reference implementation of the ISO IEC 23006 MPEG eXtensible Middleware MXM The license is derived from the MPL with the removal of some of the patent conditions from the text of the original license and clearly creates a legal boundary conditions that grants patent rights only for those who compile it only for internal purposes without direct commercial exploitation I tend to agree on Carlo s comment My final conclusion is that if the BSD family is considered compliant so shall be the MXM as it does not condition the copyright grant to the obtaining of the patents just as the BSD licenses don t deal with them And insofar an implementer is confident that the part of the code it uses if free from the patented area or it decided 
to later challenge the patent in case an infringement litigation is threatened the license works just fine as a side note I am completely and totally against software patents and I am confident that Carlo Piana is absolutely against them as well Having worked in the italian ISO JTC1 chapter I also totally agree with one point the sad truth is that if we did not offer a patent agnostic license we would have made all efforts to have an open source reference implementation moot Unfortunately ISO still believes that patents are something that is necessary to convince companies to participate in standard groups despite the existence of standard groups that do work very well without this policy my belief is that the added value of standardization in terms of cost reductions are well worth the cost of participating in the creation of complex standards like MPEG but this is for another post What I would like to make clear is that the real point is not if the proposed MXM license is OSI compliant or not the important point is why you want it to be open source Let s consider the various alternatives the group believes that an open source implementation may receive external effort much like the traditional open source projects and thus reduce maintenance and extension effort If this is the aim then the probability of having this kind of external support is quite low as companies would avoid it as the license would not allow in any case a commercial use with an associated patent license and researchers working in the area would have been perfectly satisfied with any kind of academic or research only license the group wants to increase the adoption of the standard and the reference implementation should be used as a basis for further work to turn it into a commercial product This falls in the same cathegory as before why should I look at the reference implementation if it does not grant me any potential use The group could have simply published the source code for the reference 
and said if you want to use it you should pay us a license for the embedded patents the group wants to have a golden standard to benchmark external implementations for example to see that the bitstreams are compliant Again there is no need for having an open source license The reality is that there is no clear motivation behind making this under an open source license because the clear presence of patents on the implementation makes it risky or non free to use for any commercial exploitation Microsoft for example did it much better to avoid losing their rights to enforce their patents they paid or supported other companies to create a patent covered software and released it under an open source license Since the secondary companies do not hold any patent with the releasing of the code they are not relieving any threat from the original Microsoft IPR and at the same time they use a perfectly acceptable OSI approved license As the purpose of the group is twofold increase adoption of the standards make commercial user pay for the IPR licensing I would propose a different alternative since the real purpose is to get paid for the patents or to be able to enforce them in case of commercial competitors why don t you dual license it with the strongest copyleft license available at the moment the AGPL This way any competitor would be forced to be fully AGPL and so any improvement would have to be shared exchanging the lost licensing revenue for the maintenance cost reduction or to pay for the license turning everything into the traditional IPR licensing scheme I know I know this is wishful thinking Carlo I understand your difficult role 2 Comments Another hypocrite post Open Source After Jacobsen v Katzer Posted by cdaffara in OSS business models OSS data divertissements on April 8th 2009 The reality is that I am unable to resist To see a post containing idiotic comments on open source masqueraded as a serious article makes me start giggling with I have to write them 
something my coworkers are used to by now; they sometimes comment with "another post is arriving", or something more humorous. Today's post is a nicely written essay by Jonathan Moskin, Howard Wettan and Adam Turkel on Law.com, with the title "Open Source After Jacobsen v. Katzer", referring to a recent US Federal Circuit decision. The main point of the ruling is the Federal Circuit's recognition that "the terms in an open source license can create enforceable conditions to use of copyrighted materials"; that is, software licenses (in this case the Artistic License) that limit redistribution are enforceable. Not only this, but that enforceability is also transferable: "because Jacobsen confirmed that a licensee can be liable for copyright infringement for violating the conditions of an open source license, the original copyright owner may now have standing to sue all downstream licensees for copyright infringement even absent direct contractual privity". This is the starting point for a funny tirade, going from "Before Jacobsen v. Katzer, commercial software developers often avoided incorporating open source components in their offerings for fear of being stripped of ownership rights. Following Jacobsen, commercial software developers should be even more cautious" (the article headline on the Law.com front page), to "It is perhaps also the most feared for its requirement that any source code compiled with any GPL licensed source code be publicly disclosed upon distribution, often referred to as infection" (emphasis mine). Infection! And the closing points: "Before Jacobsen v. Katzer, commercial software developers already often avoided incorporating open source components in their offerings for fear of being stripped of ownership rights. While software development benefits from peer review and transparency of process facilitated by open source, the resulting licenses by their terms could require those using any open source code to disclose all associated source code and distribute incorporated works royalty free. Following Jacobsen v. Katzer, commercial software developers should be even more cautious of incorporating any open source code in their offerings. Potentially far greater monetary remedies, not to mention continued availability of equitable relief, make this vehicle one train to board with caution."

Let's skip the fact that the law practitioners who wrote this jewel of law journalism are part of the firm White & Case, which represented Microsoft in the EU Commission's first antitrust action; let's skip the fact that terms like "infection" and the liberal use of "commercial" hide the same error already presented in other pearls of legal wisdom debated here before. The reality is that the entire frame of reference is based on an assumption I heard for the first time from a lawyer working for a quite large firm: that since open source software is free, companies are entitled to do whatever they want with it. Of course it's a simplification; I know many lawyers and paralegals who are incredibly smart (Carlo Piana comes to mind), but to these people I propose the following gedankenexperiment: imagine that, within the text of the linked article, every mention of open source were magically replaced with "proprietary source code". The Federal Circuit ruling would more or less stay unmodified, but the comments of the writers would assume quite hysterical properties, because they would have to argue that proprietary software is extremely dangerous: if Microsoft (just as an example) found parts of its source code included inside another product, it would sue the hell out of the poor developer, who would be unable to use the Cisco defence of claiming that the code "crept into" its products and thus damages should be minimal. The reality is that the entire article is written with a focus that is non-differentiating: in this sense, there is no difference between OSS and proprietary code. Exactly as for proprietary software, taking open source code without respecting the license is not allowed (the RIAA would say that it is stealing, and that the company is a pirate). So, dear customers of White & Case: stay away from open source at all costs, while we continue to reap its benefits. 5 Comments

See you in Brussels: the European OpenClinica meeting. Posted by cdaffara in OSS business models, OSS data, blog on April 8th, 2009. In a few days, on the 14th of April, I will be attending as a panelist the first European OpenClinica meeting, in the regulatory considerations panel. It will be a wonderful opportunity to meet the other OpenClinica users and developers and, in general, to talk and share experiences. As I will stay there for the evening, I would love to invite all friends and open source enthusiasts who happen to be in Brussels that night for a chat and a Belgian beer. For those who are not aware of OpenClinica: it is a shining example of open source software for health care, a Java-based server system that allows the creation of secure web forms for clinical data acquisition, and much more. The OpenClinica platform supports clinical data submission, validation and annotation; data filtering and extraction; study auditing; de-identification of Protected Health Information (PHI); and much more. It is distributed under the LGPL and has some really nice features, like the design of forms using spreadsheets (extremely intuitive). We have used it in several regional and national trials, and have even trialed it as a mobile data acquisition platform. If you can't be in Brussels but are interested in open source health care, check out OpenClinica. 2 Comments

Reliability of open source from a software engineering point of view. Posted by cdaffara in OSS business models, OSS data on April 6th, 2009. At the Philly ETE conference Michael Tiemann presented some interesting facts about open source quality, and in particular mentioned that open source software has an average defect density that is 50 to 150 times lower than proprietary software. As it stands, this statement is somewhat
incorrect, and I would like to provide a small clarification of the context and the real values. First of all, the average mentioned by Michael relates to a small number of projects: in particular the Linux kernel, the Apache web server, later the entire LAMP stack, and a small number of additional famous projects. For all of these projects, the reality is that the defect density is substantially lower than that of comparable proprietary products. A very good article on this is Succi, Paulson, Eberlein, "An Empirical Study of Open Source and Closed Source Software Products", IEEE Transactions on Software Engineering, vol. 30, no. 4, April 2004, where the study was performed. It was not the only study on the subject, but all pointed to more or less the same results. Besides the software engineering community, some companies working in the code defect identification industry also published results, like Reasoning Inc.'s "A Quantitative Analysis of TCP/IP Implementations in Commercial Software and in the Linux Kernel" and "How Open Source and Commercial Software Compare: Database Implementations in Commercial Software and in MySQL". All results confirm, in terms of defects per line of code, the much higher quality found by the academic research. Additional research identified a common pattern: the initial quality of the source code is roughly the same for proprietary and open source, but the defect density decreases much faster with open source. So it's not that OSS coders are, on average, coding wonders, but that the process itself creates more opportunities for defect resolution. As Succi et al. pointed out: "In terms of defects, our analysis finds that the changing rate, or the functions modified as a percentage of the total functions, is higher in open source projects than in closed source projects. This supports the hypothesis that defects may be found and fixed more quickly in open source projects than in closed source projects, and may be an added benefit for using the open source development model" (emphasis mine). I have a personal opinion on why this happens, and it is really related to two different phenomena. The first aspect is code reuse: the general modularity and extensive reuse of components helps developers because, instead of recoding something (and introducing new bugs), reusing an already debugged component reduces the overall defect density. This aspect was found by other research groups focusing on reuse; for example, in a work by Mohagheghi, Conradi, Killi and Schwarz called "An Empirical Study of Software Reuse vs. Defect Density and Stability" (available here), we can find that reuse introduces a similar degree of improvement in the bug density and the number of trouble reports. As can be observed from the graph, code originating from reuse has significantly higher quality compared to traditional code, and the gap between the two grows with size, as expected from basic probabilistic models of defect generation and discovery. The second aspect is that the public availability of bug data allows prioritization and better coordination of developers on triaging and, in general, on fixing things. This explains why the faster improvement appears not only in reused code but in newly generated code as well; the sum of the two effects explains the incredible difference in quality, 50 to 150 times higher than any previous effort (formal methods, automated code generation and so on). And this quality differential can only grow with time, leading to a long-term push for proprietary vendors to include more and more open source code inside their own products, to reduce the growing effort of bug isolation and fixing. 7 Comments

Dissecting words for fun and profit, or how to be a few years too late. Posted by cdaffara in OSS business models, OSS data, divertissements on April 3rd, 2009. So, after finishing a substantial part of our work on FLOSSMETRICS yesterday, I believe that I deserve some fun. And I cannot ask for more than a new
flame-inducing post from a patent attorney, right here, claiming that open source will destroy the software industry, just waiting to be dissected and evaluated. He may be right, right? Actually not; but as I have to rest somehow between my research duties with the Commission, I decided to prepare a response. After all, the writer is a fellow EE (electrical engineer), so he will probably enjoy some response to his blog post. Let's start by stating that the idea that OSS will destroy the software industry is not new; after all, it is one of the top 5 myths from Navica, and while no one has tried to say it in front of me, I am sure that it was quite common a few years ago, along with the idea that software helps terrorists: "Now that foreign intelligence services and terrorists know that we plan to trust Linux to run some of our most advanced defense systems, we must expect them to deploy spies to infiltrate Linux. The risk is particularly acute since many Linux contributors are based in countries from which the U.S. would never purchase commercial defense software. Some Linux providers even outsource their development to China and Russia" (from Green Hills Software CEO Dan O'Dowd). So, let's read and think about what Gene Quinn writes. "It is difficult if not completely impossible to argue the fact that open source software solutions can reduce costs when compared with proprietary software solutions, so I can completely understand why companies and governments who are cash starved would at least consider making a switch, and who can fault them for actually making the switch?" Nice beginning, quite common in debate strategy: first concede something to the opponent, then use the opening to push something unrelated. "The question I have is whether this is in the long term best interest of the computing software industry. What is happening is that open source solutions are forcing down pricing and the race to zero is on." Here we take something that is acknowledged (that OSS solutions are reducing costs, thus creating pressure on pricing) and then attach a second, logically unconnected claim: "the race to zero is on". Who says that a reduction in pricing leads to a reduction to zero? No one with an economics background. The reality is that competition brings down prices; theoretically, in a perfectly competitive market of identical products, down to the marginal cost of production. Which is, of course, not zero, as any software company will happily tell you: the cost of producing copies of software is very small, but the cost of creating, supporting, maintaining and documenting software is not zero. This also ignores the fact that some software companies enjoy profit margins unheard of elsewhere, which explains why there is such a rush by users to at least experiment with potentially cost-saving measures. "As zero is approached, however, less and less money will be available to be made; proprietary software giants will long since have gone belly up, and leading open source companies such as Red Hat will not be able to compete." Of course, since zero is not approached, the phrase is logically useless ("what is the colour of my boat?" Any you like, as I don't own one). But let's split it into parts anyway: of course, if zero were approached, software giants would go belly up. But why would Red Hat not be able to compete? Compete with what? If all proprietary companies disappeared and only OSS companies remained, then the market actually increases: even with increasingly small prices, revenues can grow. The same effect can be witnessed in some mobile data markets: with the reduction in the price of SMS, you see an increase in the number of messages sent, resulting in an increase in revenues. "It is quite possible that the open source movement will ultimately result in a collapse of the industry and that would not be a good thing." Still following the hypothetical theory that software pricing will go to zero (which, as I said, is not grounded in reality), here the author takes the previous considerations and uses a logical trick: before, he said that the proprietary companies would disappear; here he says that there will be a collapse of "the industry", not of the proprietary industry. This way he conflates the two into one concept of the software industry, which includes both proprietary and non-proprietary actors, and conveniently avoids the non-proprietary part. Of course, this is still not grounded in anything logical. The conclusion is obvious: "that would not be a good thing". This is another rhetorical form: by adding a grounding in something emotionally or ethically based, we introduce an external negative perception in the reader, strengthening what is still a hypothesis. And then the avoidance trap: "I am sure that many open source advocates who are reading this are already irate and perhaps even yelling that this Quinn guy doesn't know what he is talking about. I am used to it by now, I get it all the time. It is, after all, much easier to simply believe that someone you disagree with is clueless rather than question your own beliefs." This approach is so commonly used that it is beginning to show its age: use the fact that someone may be irate at reading the article to dismiss all critics as clueless people unable to question their beliefs. The use of the word "beliefs" is another standard tactic, quietly suggesting that the personal position of an OSS adopter depends on illogical, faith-based assumptions. This, of course, would be difficult to defend in an academic environment, where we assume that researchers are not faith-based in their studies; so it is an approach commonly used in online forums, blogs and the like, which are meant for a general audience. "It is a mistake though to dismiss what I am saying here or any of my other writings on computer software and open source." Of course, I am dismissing it for the content of what you write, not because of my beliefs; and I have not read anything else from you, so I am not dismissing what I have not read. The fact that I am a patent attorney, he continues, undoubtedly makes many in
the open source movement immediately think I simply don't understand technology, and my writings stating that computer software is not math have only caused mathematicians and computer scientists to believe I am a quack (so he writes). This is totally unrelated to the previous arguments: who was talking about software patents, anyway? We were talking about the role of OSS in terms of competition with the proprietary software market, and about potential effects on revenues. "Unlike most patent attorneys, I do get it, and that is probably why my writings can be so offensive to the true believers. I am not only a patent attorney, but I am an electrical engineer who specializes in computer technologies, including software and business method technologies. I write software code, and whether you agree with me or not, telling me I simply don't understand is not intellectually compelling." Of course, being part of a class of people (like EEs) is in itself not qualifying in any way; any comment I have made up to now would be equally applicable independently of the author. Claiming to "get it", or implying that someone doesn't get it because he works as a patent attorney, is silly, and here the author falls into the same fallacy. By the way, I know some patent attorneys who perfectly get it, along with others who believe that open source software is made by fairies in the forest. As I said, being a member of a class is in itself useless in deciding the truth of a statement. "I do get it, and the reality is that open source software is taking us in a direction that should scare everyone." Here the author uses the membership fallacy discussed before as an appeal to authority: I do get it, I am qualified, therefore I am saying the truth. And what I am saying is that OSS is dangerous, and the fact that no one apart from O'Dowd (who believes that Linux will be infiltrated by terrorists) perceives the problem is because they are not looking with enough attention. "Sun Microsystems is struggling to say the least, and the reality is that they are always going to struggle because they are an open source company, which means that the only thing they can sell is service." Sun Microsystems has been struggling for a long time now (unfortunately: I have always loved their products). Personally, I believe that the new CEO is doing quite a turnaround on a company that has languished for a long time in a shrinking but highly lucrative market, like SGI did in the past; but that is better left to financial analysts. Anyway, their financial results were not that good even before the OSS turnaround imposed by Jonathan Schwartz, so there is no real link between the two parts of the phrase; on the contrary, the OSS part is growing nicely, while the large-scale enterprise server part is decreasing fast. It also introduces an additional error: the idea that being OSS means that you can sell only services. The author clearly has not read much on OSS business models, but he should not worry; I would be happy to send some papers on the subject. "Whenever you sell time, earning potential is limited. There are only so many hours in the day and only so much you can charge by the hour. When you have a product that can be replicated, whether it be a device, a piece of proprietary software, or whatever, you have the ability to leverage, which simply doesn't exist when you are selling yourself by the hour." Of course, this is the reality of consulting. This, however, does not stop companies like IBM Global Services, Accenture and friends from living off consulting, simply by asking very high prices for a day of a specialized consultant. Or you can find groups like the 451 Group or RedMonk that are more efficient and targeted towards specific markets. "So there is a realistic ceiling on the revenue that can be earned by any open source company, and that ceiling is much lower than any proprietary software company." So, assuming that by-the-hour services are the only possible OSS business model, and that the price per hour cannot match that of a large consulting firm, then there is a revenue ceiling that is lower than that of proprietary software companies. The fact that both parts of the premise are unsupported by arguments makes the conclusion unproven. "It is also an undeniable truth that the way many if not most service companies compete is by price. When service companies try and get you to switch over, they will promise to provide the same or better service for a lower price." This should be a supporting argument for the claim that OSS companies charge a lower per-hour price than competing companies, and it uses Sun as an example. Of course, it continues to be an unsupported argument, even considering that the author has probably never paid the bill for a Sun consultant, or he would have discovered that their pricing is in line with the rest of the market. "The trouble with freeware is that there is no margin on free, and while open source solutions are not free, the race to asymptotically approach free is on, hence why I say the race to zero is in full swing." Now the author switches from OSS to freeware, to remind us that open source is, after all, free. Probably RMS would say at this point "free as in free speech, not free as in free beer", but his ideas would probably be dismissed. The use of "free" here is meant to create the appearance of a logical connection between freeware and open source: of course the author acknowledges that OSS is not free, but, as part of the same family, both are supposedly participating in the "asymptotically approach free" race to zero. As stated before, in perfect competition the race is not to zero but to the marginal cost; using "freeware" is a way to imply that this cost is zero as well, when the reality is that it is not zero, merely lower than writing everything from scratch, thanks to the reuse opportunity. And then we move to something completely different, as Monty Python would say: unfortunately (writes Quinn), many in the patent legal community are engaging in the race to zero as well. For example, there are patent attorneys and patent agents who advertise
online claiming to be able to draft and file a complete patent application for under $3,000; one of the most common ads running provides patent applications for $2,800, and he has seen some agents advertise prices as low as $1,400 for a relatively simple mechanical invention. The race to zero is, he says, in full swing with respect to patent services aimed at independent inventors and start-up companies; it is also being pushed by major companies who want large law firms to provide patent services for fees ranging from $3,500 to $7,000 per application, forcing many large patent law firms to simply stop offering patent drafting and prosecution services, with major law firms seeking to outsource such work while hoping to keep the client for litigation purposes and to negotiate business deals. Dear writer: this is called competition. And, as before, it is not a race to zero, as you will never find an attorney doing this kind of service for free, with no strings attached; or, if they do, they will probably go out of business and leave the market. "Does anyone really think that paying $1,400 for an allegedly complete patent application is a wise business decision? I can't imagine that if you say that to yourself out loud it would sound like such a good idea." Well, IF the author can prove that application quality and price are correlated, then this becomes a decision based on economic principles, and depends on the hypothetical future value of the patent, measures of indirect value, and so on. If the correlation is not strict, then any rational actor would simply seek the lowest possible price. "Likewise, Fortune 500 companies that are pushing prices down and wanting to pay only $3,500 for a patent application can't really expect to get much, if any, worthwhile protection. Do they? I suppose they do, but the reality is that they don't. The reality is that when you are drafting a patent application you can ALWAYS make it better by spending more time. But to think that you can force a patent attorney or agent to spend the same length of time working on a project whether you pay under $3,500, $7,000 or $10,000 is naïve. Everyone inherently knows this to be true but somehow convinces themselves otherwise." So Fortune 500 companies are managed by morons who don't understand the value of spending more time? I suspect it is a lack of culture, or a lack of perception of value; both can be cured by promotion and dissemination of information. Still, this does not apply to open source. "As companies continue to look for the low cost solution, quality is sacrificed." Ah! Here's the connection: as for patent applications, software has the same quality-price correlation. "Now, I full well realize that much of the open source software is better than proprietary software, and I know that it can be much cheaper to rely on open source solutions than to enter into a license agreement for proprietary software"; but I can't say it out loud, thinks the author, or they will burn me alive. So let's change the subject again. "But where is that going to lead us? Once mighty Sun Microsystems is hanging on for dear life, and is that who you want to be relying on to provide service for your customized open source solutions?" What if Sun simply disappears? Can you trust a company like Sun, which by using OSS is destroying itself? Or are you thinking about using OSS, and taking the risk of becoming such a dying corpse yourself? So let's make sure that the poor moron who thinks that OSS can save money understands the risks, by bringing in another example: gyms. He remembers that, years ago, he joined a gym and purchased a yearly membership, only to have the gym close less than 2 months later; a similar thing happened to his wife several years earlier, when she bought a membership to a fitness and well-being company (who shall remain nameless): eat better and get exercise, counseling and support, what a deal! Of course, it was a deal only until the company filed for bankruptcy and left all its members high and dry. Luckily, he put off joining himself,
otherwise they would have been out two memberships after less than 30 days. Of course, the parallel between gyms and software companies is not so strict, and is not related to OSS at all; examples abound of what happens in all sectors. At least with OSS you have the source code, and you can do something yourself. "With once mighty companies falling left and right, do you really want to bet the IT future of your company or organization on an industry whose business model is the race to zero?" So, dear author: the race is not to zero, and yes, I would bet on open source, so that, at least, I am free to continue to use your gym even after it has closed. 4 Comments

The new FLOSSMETRICS project liveliness parameters. Posted by cdaffara in OSS business models, OSS data, blog on April 2nd, 2009. While working on the final edition of our FLOSSMETRICS guide on OSS, I received the new automated estimation procedures from the other participants in the project and the QUALOSS people, namely Daniel Izquierdo, Santiago Dueñas and Jesus Gonzalez-Barahona from the Departamento de Sistemas Telemáticos y Computación (GSyC) of the Universidad Rey Juan Carlos. The new parameters will soon be included in the automated project page that is created in the FLOSSMETRICS database (here is an example for the Epiphany web browser), and will feature a very nice colour-coded scheme that provides an at-a-glance view of the risks, or strengths, of a project. A nice feature of FLOSSMETRICS is that it provides information not only on code but on ancillary metrics like mailing lists, committer participation and so on; and all the tools, code and databases are open source. Now, on with the variables (ID; measurement procedure idea; new indicators):

CM-SRA-1: retrieving the date of the first bug report for each member of the community, we are able to know whether the number of new members reporting bugs remains stable, taking into account the slope of the line y = mx + b fitted to the aggregated numbers over periods of one year. Green if m > 0; Yellow if m ≈ 0; Red if m < 0; Black if there are no new submitters for several periods.

CM-SRA-2: retrieving the date of the first commit for each member of the community, we are able to know if the
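As a rough illustration of how such a slope-based, colour-coded liveliness indicator works, here is a minimal sketch of CM-SRA-1. The function name, the "flat slope" band `eps` and the `dead_periods` threshold are my own assumptions for illustration, not the actual FLOSSMETRICS code:

```python
def liveliness_colour(new_submitters_per_period, eps=0.05, dead_periods=3):
    """Classify a project from yearly counts of first-time bug reporters.

    new_submitters_per_period -- e.g. [12, 9, 7, 0, 0, 0], one count per year
    eps          -- slope band treated as "flat" (Yellow); assumed value
    dead_periods -- trailing periods with no new submitters => Black
    """
    n = len(new_submitters_per_period)
    # Black: no new submitters for several consecutive recent periods.
    if n >= dead_periods and all(
            c == 0 for c in new_submitters_per_period[-dead_periods:]):
        return "Black"
    # Ordinary least-squares slope m of the line y = m*x + b
    # fitted to the per-period counts.
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(new_submitters_per_period) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, new_submitters_per_period))
    den = sum((x - mean_x) ** 2 for x in xs)
    m = num / den
    if m > eps:
        return "Green"   # new reporters arriving at a growing rate
    if m < -eps:
        return "Red"     # inflow of new reporters is shrinking
    return "Yellow"      # roughly stable inflow
```

For example, `liveliness_colour([3, 5, 8, 11])` yields "Green", while a project whose last three yearly periods show no new submitters at all, such as `[5, 2, 0, 0, 0]`, is flagged "Black".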

    Original URL path: http://carlodaffara.conecta.it/category/oss-business-models/page/4/index.html (2016-02-18)

  • OSS data « carlodaffara.conecta.it
    with my network Let s start with performance expectancy Most Linux pc are really very low cost substandard machines assembled with the overall idea that price is the only sensitive point In this sense while true that Linux and open source allows for far greater customizability and speed it is usually impossible to compensate for extreme speed differences this means that to be able to satisfy the majority of users we cannot aim for the lowest possible price A good estimate of the bill of materials is the median of the lowest quartile of the price span of current PC in the market approximately 10 to 20 more than the lowest price After the hardware is selected our suggestion is to use a standard linux distribution like Ubuntu and add to it any necessary component that will make it work out of the box Why a standard distribution Because this way users will have not only a potential community of peers to ask for help but the cost of maintaining it will be spread as an example most tailor made Linux distributions for NetBooks are not appealing because they employ old version of software packages This provides an explanation of why Dell had so much success in selling Linux netbooks compared to other vendors with one third of the netbooks sold with plain Ubuntu Having a standard distribution reduces costs for the technology provider provides a safety mechanism for the reseller chain that is not dependent on a single company and provides the economic advantage of a cost less license that increase the chain margin Effort expectancy what is the real expectancy of the user Where do the user obtains his informations from The reality is that most potential adopters get their information from peers magazines and in many cases from in store exploration and talks with store clerks The clear preference that most users demonstrate towards Windows really comes from a rational reasoning based on incomplete information the user wants to use the PC to perform some activites he knows 
that to perform such activities software is needed he knows that Windows has lots of software so Windows is a safe bet The appearance of Apple OS X demonstrated that this reasoning can be modified for example by presenting a nicer user experience OS X owners get in contact with other potential adopters are shown a different environment that seems to be capable of performing the most important talks and so the diffusion process can happen For the same process to be possible with Linux we must improve the knowledge of users to show them that normal use is no more intimidating than that of Windows and that software is available for the most common tasks This requires two separate processes one to show that the basic desktop is capable of performing traditional tasks easily and another to show what kind of software is available My favourite way for doing this for in store experiences is through a demo video usually played in continuous rotation that shows some basic activities for example how Network Manager provides a simple one click way to connect to WiFi or how Nautilus provides previews of common file formats There should be a fast 5 minute section to show that basic activities can be performed easily I prefer the following list web browsing showing compatibility with sites like FaceBook Hi5 Google Mail changing desktop properties like backgrounds or colours connecting to WiFi networks printer recognition and setup package installation I know that Ubuntu or OpenSUSE or Fedora users will complain that those are functionalities that are nowadays taken for granted But consider what even technical journalist sometimes may write about Linux It booted like a real OS with the familiar GUI of Windows XP and its predecessors and of the Mac OS icons for disks and folders a standard menu structure and built in support for common hardware such as networks printers and DVD burners Booted like a real OS And icons So much for the change in perspective like the Vista user 
perception problem demonstrated So a pictorial presentation is a good media to provide an initial fear reducing informative presentation that will not require assistance from the shop staff On the same side a small informative session may be prepared we suggested a 8 page booklet for the assistants to provide answers comparable to that offered for Windows machines Usability of modern linux distribution is actually good enough to be comparable to that of Windows XP on most tasks In a thesis published in 2005 the following graph was presented using data from previous work by Relevantive The time and difficulty of tasks was basically the same most of the problems that were encountered by users were related to bad naming of the applications The main usability problems with the Linux desktop system were clarity of the icons and the naming of the applications Applications did not include anything concerning their function in their name This made it really hard for users to find the right application they were looking for This approach was substantially improved in recent desktop releases adding a suffix to most applications for example GIMP image editor instead of GIMP As an additional result the following were the subjective questionnaire results 87 of the Linux test participants enjoyed working with the test system XP 90 78 of the Linux test participants believed they would be able to deal with the new system quickly XP 80 80 of the Linux test participants said that they would need a maximum of one week to achieve the same competency as on their current system XP 85 92 of the Linux test participants rated the use of the computers as easy XP 95 This provides evidence than when properly presented a Linux desktop can provide a good end user experience The other important part is related to applications two to five screenshots for every major application will provide an initial perception that the machine is equally capable of performing the most common tasks and equally 
important is the fact that such applications need to be pre-installed and ready to use. And by "ready to use" I mean with all the potential enhancements that are available but not installed by default, like the extended GIMP plugin collection (available under Ubuntu as gimp-plugin-registry) or the various thesauri and cliparts for OpenOffice.org. A similar activity may be performed with regard to games, which should be already installed and available for the end user. Some installers for the most requested games may be added using Wine, through a pre-loader and installer like PlayOnLinux; we found that in recent Wine builds performance is quite good, and in general better than that of proprietary repackagings like Cedega. One suggestion that we added is to have a separate set of repositories from which to update the various packages, to allow for pre-testing of package upgrades before they reach the end users. This, for example, would allow for the creation of alternate packages, outside of the Ubuntu main repositories, that guarantee the functionality of the various hardware parts even if the upstream driver changes — like it recently happened with the inclusion of the new Atheros driver line in the kernel, which complicated the upgrade process for netbooks with this kind of hardware chipset. The cost and complexity of this activity is actually fairly low, requiring mainly bandwidth and storage (something that, in the time of Amazon and cloud computing, has a much lower impact) and limited human intervention. The next variable is social acceptance, which is much more nuanced and difficult to assess; it also changes in a significant way from country to country, so it is more difficult for me to provide simple indications. One aspect that we found quite effective is the addition, on the side of the machine, of a simple hologram similar to that offered by proprietary software vendors, to indicate a legitimate origin of the software. We found that a significant percentage of potential users actually looked on the back or the side of the machine to see if such a feature was present, fearing that the machine could possibly be loaded with pirated software. Another important aspect is related to the message that is correlated with the acquisition: one common error is to market the machine as the lowest-cost one, a fact that sends two negative messages — that the machine is somehow "for the poor", and that value (a complex, multidimensional variable) is collapsed onto price alone, making it difficult to convey the message that the machine is really more about value for money than money. This is similar to how Toyota invaded the US car market by focusing both on low cost and on quality, and making sure that value was perceived in every moment of the transaction, from when the potential customer entered the showroom to when the car was bought. In fact, it would be better to have a combined pricing that is slightly higher than the lowest possible price, to make sure that there is a psychological anchoring. While price-sensitive users are, along with enthusiasts, those that up to now drove the adoption of Linux on the desktop, it is necessary to extend this market to the more general population; this means that purely price-based approaches are not effective anymore. As for the last aspect, facilitating conditions, the main perceived hurdle is the lack of immediate assistance by peers, something that is nearly guaranteed with Windows thanks to the large installed base. So a feature that we suggested is the addition of an instant-chat icon on the desktop to ask for help, together with a set of web pages with some of the most commonly asked questions and links to online fora. The real need for such a feature is somewhat reduced by the fact that the hardware is pre-integrated and that pre-testing is performed before package updates, but it is a powerful psychological reassurance and should receive a central place on the desktop. Equally important is the inclusion of non-electronic documentation that allows for easy
browsing before the beginning of a computing session; a very good example is the Linux Starter Pack, an introductory magazine-like guide. We discovered that plain, well-built Linux desktops are generally well accepted, with limited difficulties: most users after 4 weeks are proficient and generally happy with their new user environment. 4 Comments

The dynamics of OSS adoptions, II: diffusion processes
Posted by cdaffara in OSS business models, OSS data, Uncategorized on February 27th, 2009

(A follow-up to "The dynamics of OSS adoption", part 1.) The most common process behind OSS adoption is called diffusion, and is usually modelled using a set of differential equations. It is based on the idea that the market is made of a set of interoperating agents, each one deciding independently which technology to adopt at different moments; the model is usually capable of handling multiple participants in a market and of predicting overall evolution. A good example of a diffusion-based dynamic equilibrium is the web server market, when total server numbers are used. If we take the data from Netcraft and we model each individual server type as a competitor, we get this kind of graph, which is consistent with a traditional Bass Model explanation (data for Apache was added to that of Google Web Server, which is Apache-based; bicubic smoothing was used to get the trend lines). Diffusion models tend to generate this kind of equilibrium lines, with the market moving, in a more or less consistent way, to an equilibrium that changes only when a specific technology is substituted, by moving to another, different status. The probability of choosing one technology over the other depends on several factors; a very good model for such adoption is the UTAUT model (some PDF examples here and here), which was found capable of predicting 70% of the variance of adoption success — meaning that the parameters in the model explain nearly perfectly whether you will adopt a technology or not. The important point to remember: this is about individual adoption, not mandated and without external constraints. In this sense, we can use it to predict how a PC owner chooses her web browser, or how a small company may choose which web server to use. The model uses four parameters: performance expectancy, effort expectancy, social influence, and facilitating conditions. Performance expectancy: the degree to which a person believes that using a particular system would enhance his or her job performance, or the degree to which using an innovation is perceived as being better than using its precursor. Effort expectancy: the degree to which a person believes that using a system would be free of effort, or the degree to which a system is perceived as relatively difficult to understand and use. Social influence: the individual's internalization of the reference group's subjective culture, and specific interpersonal agreements that the individual has made with others in specific social situations; or the degree to which use of an innovation is perceived to enhance one's image or status in one's social system. Facilitating conditions: reflects perceptions of internal and external constraints on behaviour, and encompasses self-efficacy, resource facilitating conditions and technology facilitating conditions; or objective factors in the environment that observers agree make an act easy to do, including the provision of computer support. In the next post I will present an example of these four parameters in the context of an OSS adoption. 2 Comments

Random walks and Microsoft
Posted by cdaffara in OSS business models, OSS data, blog, divertissements on February 26th, 2009

Sometimes talking about Microsoft and open source software is difficult, because it seems to have many heads looking in different directions. At the Stanford Accel Symposium, Bob Muglia, president of Microsoft's Server and Tools Business, was bold enough to say that at some point "almost all our products will have open source in them. If
MySQL or Linux do a better job for you, of course you should use those products." Of course, we all know that even Steve Ballmer mentioned: "I agree that no single company can create all the hardware and software. Openness is central because it's the foundation of choice" — a statement about which Matt Asay commented, with some irony, that the openness claims are mainly directed towards competitors like Apple and its iTunes/iPod offer. I would just like to point to one of the Comes vs. Microsoft exhibits (sometimes more interesting than your average John Grisham or Stephen King novel), where we can find such pearls of openness and freedom of choice:

"From: Peter Wise
Sent: Monday, October 07, 2002, 9:43 AM
To: Server Platform Leadership Team
Subject: CompHot Escalation Team Summary — Month of September 2002

CompHot Escalation Team Summary — Month of September 2002. Microsoft Confidential. Observations and Issues: Linux infestations are being uncovered in many of our large accounts as part of the escalation engagements. People on the escalation team have gone into AXA, Ford, WalMart, the US Army and other large enterprises, where they've helped block one Linux threat, only to have it pop up in other parts of the businesses. At General Electric alone, at least five major pilots have been identified, as well as a new Center of Excellence for Linux at GE Capitol."

"Infestation" is not exactly the word I would use to express the idea of customer choice, but you know how the software world is a battle zone. I am so relieved to see that they are now really perceiving open source as part of their ecosystem. 1 Comment

Transparency and dependability for external partners
Posted by cdaffara in OSS business models, OSS data, blog on February 25th, 2009

As a consultant, I frequently find myself answering questions about what makes open source better — not only for some adopters, but for the companies and integrators that form a large network ecosystem and that, up to now, had only proprietary software vendors as a source of software and technology. Many IT projects had to integrate and create workarounds for bugs in proprietary components, because no feedback on their status was available. Mary Jo Foley writes on the lack of feedback to beta testers from Microsoft: "During a peak week in January we (the Windows dev team) were receiving one Send Feedback report every 15 seconds for an entire week, and to date we've received well over 500,000 of these reports. Microsoft has fixes in the pipeline for nearly 2,000 bugs in Windows code (not in third-party drivers or applications) that caused crashes or hangs. That's great Microsoft is getting a lot of feedback about Windows 7. What kind of feedback are testers getting from the team in return? Very little. I get lots of e-mail from testers asking me whether Microsoft has fixed specific bugs that have been reported on various comment boards and Web sites. I have no idea — and neither do they." (Emphasis mine.) Open source, if well managed, is radically different. I had a conversation with a customer just a few minutes ago, asking for specifics on a bug encountered in Zimbra; I answered simply by forwarding the link to the Zimbra dashboard. Not to be outdone, Alfresco has a similar openness. Or one of my favourite examples: Openbravo. Transparency pays, because it provides a direct handle on development and provides a feedback channel for the eventual network of partners or consultancies that are living off an open source product. This kind of transparency is becoming more and more important in our IT landscape, because time constraints and visibility of development are becoming even more important than pure monetary considerations, and it allows adopters to eventually plan for alternative solutions depending on the individual

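The Bass-type diffusion dynamics mentioned in the post on OSS adoption above can be sketched numerically. This is an illustrative sketch only — the `p`, `q` and market-size values below are invented for demonstration, not fitted to the Netcraft web-server data the post refers to:

```python
def bass_adopters(p, q, m, steps):
    """Discrete-time Bass diffusion: returns the cumulative-adopter curve N(t).

    p: innovation coefficient (spontaneous adoption)
    q: imitation coefficient (word-of-mouth adoption)
    m: total market size (potential adopters)
    """
    n = 0.0
    curve = [n]
    for _ in range(steps):
        # New adopters per period: spontaneous plus imitation-driven,
        # drawn from the remaining pool of non-adopters (m - n).
        n += (p + q * n / m) * (m - n)
        curve.append(n)
    return curve

# Illustrative coefficients; the result is the classic S-curve:
# slow start, rapid diffusion, then saturation near the ceiling m.
curve = bass_adopters(p=0.03, q=0.38, m=1.0, steps=60)
```

Plotting `curve` reproduces the equilibrium-seeking behaviour the post describes: adoption approaches a stable share and stays there until a substitution event changes the parameters.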
    Original URL path: http://carlodaffara.conecta.it/category/oss-data/page/4/index.html (2016-02-18)

  • open source « carlodaffara.conecta.it
extremely simplified DCT, which is so un-DCT-like that it is often referred to as the "HCT" (H.264 Cosine Transform) instead. This simplified transform results in roughly 1% worse compression, but greatly simplifies the transform itself, which can be implemented entirely with adds, subtracts, and right shifts by 1. VC-1 uses a more accurate version that relies on a few small multiplies (numbers like 17, 22, 10, etc.). VP8 uses an extremely, needlessly accurate version that uses very large multiplies (20091 and 35468). The third difference is that the Hadamard hierarchical transform is applied for some inter blocks, not merely i16x16; unlike in H.264, the hierarchical transform is luma-only and not applied to chroma. For quantization, the core process is basically the same among all MPEG-like video formats, and VP8 is no exception. (Personal note: quantization methods are mostly from MPEG-1/2, where most patents are already expired — see the list of expired ones in the MPEG LA list.) VP8 uses a scheme much less flexible than H.264's custom quantization matrices: it allows for adjusting the quantizer of luma DC, luma AC, chroma DC, and so forth, separately. The killer mistake that VP8 has made here is not making macroblock-level quantization a core feature of VP8. Algorithms that take advantage of macroblock-level quantization are known as "adaptive quantization" and are absolutely critical to competitive visual quality. (Personal note: it is basically impossible to implement adaptive quantization without infringing, especially on patents issued after 2000; even the relatively suboptimal MPEG-style delta quantizer system would be a better option. Furthermore, only 4 segment maps are allowed, for a maximum of 4 quantizers per frame; both points are patented — delta quantization is part of MPEG-4, and unlimited segment maps are covered.) VP8 uses an arithmetic coder somewhat similar to H.264's, but with a few critical differences. First, it omits the range/probability table in favor of a multiplication. Second, it is entirely non-adaptive: unlike H.264's, which adapts after every bit decoded, probability values are constant over the course of the frame. (Personal note: probability tables are patented in all video coding implementations, not only MPEG-specific ones, as are adaptive probability tables.) VP8 is a bit odd: it chooses an arithmetic coding context based on the neighboring MVs, then decides which of the predicted motion vectors to use, or whether to code a delta instead. (Personal note: because straight delta coding is part of MPEG-4.) The compression of the resulting delta is similar to H.264, except for the coding of very large deltas, which is slightly better (similar to FFV1's Golomb-like arithmetic codes). Intra prediction mode coding is done using arithmetic coding contexts based on the modes of the neighboring blocks. This is probably a good bit better than the hackneyed method that H.264 uses, which always struck me as being poorly designed. Residual coding is different from both CABAC and CAVLC. VP8's loop filter is vaguely similar to H.264's, but with a few differences. First, it has two modes (which can be chosen by the encoder): a fast mode and a normal mode. The fast mode is somewhat simpler than H.264's, while the normal mode is somewhat more complex. Secondly, when filtering between macroblocks, VP8's filter has wider range than the in-macroblock filter; H.264 did this, but only for intra edges. VP8's filter omits most of the adaptive strength mechanics inherent in H.264's filter; its only adaptation is that it skips filtering on p16x16 blocks with no coefficients. What we can obtain from this very thorough (thanks, Jason) analysis is that, from my point of view, it is clear that On2 was actually aware of patents and tried very hard to avoid them. It is also clear that this is in no way an assurance that there are no situations of patent infringement — only that it seems that due diligence was performed. Also, WebM is not comparable to H.264 in terms of technical sophistication; it is more in line with MPEG-4/VC-1, but this is clearly done to avoid recent
patents (some of the patents on the older specifications are already expired; for example, all France Telecom patents on H.264 are expired), and in this sense Dark Shikari's claim that the specification is not as good as H.264 is perfectly correct. It is also true that x264 beats the hell out of current VP8 encoders — and basically every other encoder on the market. Despite this, in a previous assessment Dark Shikari performed a comparison of anime/cartoon encoding and found that VP7 was better than Apple's own H.264 encoder; not really that bad. The point is that reference encoders are designed to be a building block, and improvements that respect possible patents in the area are certainly possible — maybe not reaching the level of x264's top quality (I suspect the psychovisual adaptive schemas that allowed such a big gain in x264 are patented and non-reproducible), but it should be a worthy competitor. All in all, I suspect that the MPEG LA rattling will remain only noise for a long, long time. Update: many people mentioned, in blog posts and comments, that Sun Microsystems (now Oracle) in the past tried a very similar effort, namely to restart development of a video codec based on past and expired patents and improve its competitiveness from there, avoiding active patents. They used the Open Media Commons IPR methodology to avoid patents and to assess patent troubles, and in particular they developed a handy chart that provides a timeline of patents and their actual status (image based on the original chart, obtained here). As can be observed, most of the techniques encountered in the OMV analysis are still valid for VP8; the advanced deblocking filter, of course, is present in VP8, but with a different implementation. It also provides additional support to the idea that On2 developers were aware of patents in the area and came up with novel ideas to work around existing patents, much like Sun with its OMV initiative. In the same post, the OMV block structure graph includes several Sun IPR parts that are included in the OMV specification (the latest version is available here as a PDF; the site is not updated anymore) and that maybe could be reused, with Oracle's explicit permission, in WebM. And to answer the people asking for indemnification from Google, I would like to point my readers to a presentation of OMV, and in particular to slide 10: "While we are encouraged by our findings so far, the investigation continues and Sun and OMC cannot make any representations regarding encumbrances or the validity or invalidity of any patent claims or other intellectual property rights claims a third party may assert in connection with any OMC project or work product." This should put to rest the idea that Sun was indemnifying people using OMV — exactly as I am not expecting such indemnification from Google or any other industry player, by the way. open source 36 Comments

How to analyse an OSS business model, part five
Posted by cdaffara in OSS business models on May 19th, 2010

(Part five of an ongoing series. Previous parts: one, two, three, four.) Marten Mickos, of MySQL fame, once said that people spend time to save money, and some spend money to save time. This consideration is at the basis of one of the most important parameters for most OSS companies that use the open core or freemium model, namely the conversion rate: the percentage of people that pays for enterprise or additional functionalities, versus the total number of users. With most OSS companies reaching less than 0.1%, and only very few capable of reaching 1%, one of the obvious goals of the CEOs of said open source companies is to find a way to convert more users to paying for services, or to increase the monetization rate. My goal today is to show that such an effort can have only very limited success, and may even be dangerous for the overall acceptance of the software project itself. Let's start with an obvious concept: everyone has a resource at his or her disposal, namely time. This resource has some interesting properties: it is universal (everyone has time); it is inflexible
(there are 24 hours in a day, and anything you can do will not change that); and efficiency (work done in the unit of time) has a lower bound of zero and a higher bound that depends on many factors — efficiency can vary by one order of magnitude or more. Another important parameter is the price per hour for having something done. At this point there is a common mistake, which is assuming that there is a fixed hourly rate, or at least a lower bound on the hourly rate. This is clearly wrong, because the price per hour is the simple ratio between what someone is paying you to do the work and the amount of time required for that action; so, if no one pays you, that ratio is zero. So let's imagine someone working for a web company, where one of the activities requires a database. Our intrepid administrator will start learning something about MySQL, will work diligently, and will install it (OK, nowadays it's nearly point-and-click; imagine it done a few years ago, with compiles and all that stuff). This system administrator will never pay for MySQL Enterprise or whatever, because his pay is fixed and there is no allocated budget for him to divert money to external entities. So, whatever is done by MySQL to monetize the enterprise version, there will simply be no way to obtain money from the people that are investing time — unless you sabotage the open source edition so that users are forced to pay for the enterprise one. But what will happen then? People will be forced to look at alternatives, because in any case time is the only resource available to them. This basic concept is valid even when companies do have budget available. Consider the fact that the percentage of revenues invested in ICT (information and communication technology) by companies is on average around 5%, with some sectors investing slightly less (4%), up to high-tech companies investing up to 7%. This percentage is nearly fixed — valid for small to large companies, and across countries and sectors; this means that the commercial OSS company is competing for small slices of budget, and its capability to win is related mainly to the perceived advantages of going "enterprise" versus investing personnel time. Does this mean that trying to increase the conversion rate is useless? Not exactly. The point is that you cannot address those users that have no budget available, as those will never be able to pay for your enhanced offering; you have two different possible channels: those that are using your product and may have the potential to pay, or the group of non-users with the same demographics. So the reality is that mining current users is potentially counterproductive, and it is more sensible to focus on two interlocking efforts: increasing the number of adopters, and making sure that people know about the commercial offering. This can be performed virally, that is, by creating an incentive for people to share the knowledge of your project with others, which is very fast, efficient and low-cost; however, this approach has the disadvantage that sharing will happen within a single group of peers. In fact, viral sharing happens only within homologous groups, and this means that it is less effective for reaching those users that are outside the same group — for example, the non-users that we are pointing at. This means that purely viral efforts are not capable of reaching your target; you need to complement them with more traditional marketing efforts. Next: resource and development sharing, or how to choose your license depending on your expectations of external participation. open source OSS adoption OSS business models 2 Comments

On open source competence centers
Posted by cdaffara in OSS adoption on April 26th, 2010

Just a few days ago, Glynn Moody posted a tweet with the message "Italy to begin an open source competence centre" — a result of the recent EU project Qualipso, created with the purpose of identifying barriers to OSS adoption and quality metrics, and with the explicit target of creating a network of OSS competence centers, sharing the results of the
research effort and disseminating it to the European community of companies and public administrations. For this reason the project created more than one competence center, and created a network (which you can find under this website) to cover not only Europe, but China, India and Japan as well. This is absolutely a great effort, and I am grateful to the Commission and the project participants for their work (hey, they even cited my work on business models!). There is, however, an underlying attitude that I found puzzling, and partially troubling as well. The announcement mentioned "the competence center of Italy", and was worded as if there had been no previous such effort in that country. If you go to the network website, you will find no mention of any other competence center — even when you consider that the Commission already has a list of such centers (not much updated, though), and that on OSOR there is even an official group devoted to Italian OSS competence centers; among them, two in Friuli (disclaimer: I am part of the technical board of CROSS, and work in the other), Tuscany, Trentino, Umbria, Emilia (as part of the PITER project), a national one, and many others that I probably forgot. Then we have Austria, Belgium, Denmark, Estonia, Finland, Germany, Ireland, Malta, the Netherlands, the Nordic Countries and many others. What is incredible is that most of these centers actually don't link one with the other, and they hardly share information. The new Qualipso network of competence centers does not list any previous center, nor does it point to already prepared documentation, even that by the Commission. The competence center network website does not link to OSOR either, nor does it link to other projects, past or current. I still believe that competence centers are important, and that they must focus on what can be done to simplify adoption, or to turn adoption into a commercially sustainable ecosystem — for example, by facilitating the embracing of OSS packages by local software companies. In the past I tried to summarize this in the following set of potential activities: creating software catalogs using an integrated evaluation model (QSOS, Qualipso, FLOSSMETRICS — anything, as long as it is consistent); for selected projects, finding local support companies with competence in the identified solution; collecting the needs of potential OSS users, using standardized forms (Technology Request/Technology Offer, TR/TO), to identify IT needs; finding the set of OSS projects that together satisfy the Technology Request (and, if there are still unsatisfied requirements, joining together several interested users to ask, with a commercial offer, for a custom-made OSS extension or project); aggregating and restructuring the information created by other actors (like IST, IDABC, individual national initiatives, OSOSS, KBST, COSS). This model helps in overcoming several hurdles to OSS adoption: it correctly identifies needs and, through analysis of already published TRs, can help in aggregating demand; it helps in finding appropriate OSS solutions, even when solutions are created through the combination of individual pieces; and it helps in finding actors that can provide commercial support or know-how. It also has several potential advantages over traditional mediation services: the center does NOT participate in the commercial exchange, and in this sense acts as a pure catalyst — this way it does not compete with existing OSS companies, but provides increased visibility and an additional dissemination channel; it remains a simple and lean structure, reducing management costs; by reusing competences and information from many sources, it can become a significant learning center even for OSS companies (for example, in the field of business models for a specific OSS project); and it is compatible with traditional IT incubators, and can reuse most of the same structures. Most of this idea revolves around the concept of sharing effort and reusing knowledge already developed in other areas or countries. I find it strange that the most difficult idea among these competence centers is sharing.
Update: corrected the network project name (Qualipso, not Qualoss). Thanks to Matteo for spotting it. open source OSS adoption public administrations 8 Comments

How to analyse an OSS business model, part four
Posted by cdaffara in OSS business models on April 12th, 2010

Welcome to the fourth part of our little analysis of OSS business models (first part here, second part here, third part here). It is heavily based on the Osterwalder model, and follows through the examination of our hypothetical business model; after all the theoretical parts, we will try to add a simple set of hands-on exercises and tutorials based on a more or less real case. We will focus today on the remaining parts of our model canvas, with less detail, as those parts are more or less covered by every business management course, and we will start a little bit of practical exploration to create the actors/actions model that was discussed in the previous instalments. Cost structure: this is quite simple — the costs incurred during the operation of our business model. There are usually two kinds of models: cost-driven, where the approach is the minimization of costs, and value-driven, where the approach is the maximization of value creation. Most models are a combination of the two; for example, many companies have a low-cost offering to increase market share, and a value offering with a higher cost and higher overall quality. In open source companies it is usually incorrect to classify the open source edition as cost-driven, unless a specific

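Returning to the conversion-rate arithmetic in the "part five" post above: the effect of a sub-1% conversion rate is easy to make concrete. The user count and per-seat price below are invented for illustration; only the ~0.1% conversion figure comes from the text:

```python
def freemium_revenue(users, conversion_rate, price_per_seat):
    """Paying customers and revenue implied by a given conversion rate.

    users: total user base of the open source edition
    conversion_rate: fraction of users paying for the enterprise edition
    price_per_seat: annual price per paying customer
    """
    paying = users * conversion_rate
    return paying, paying * price_per_seat

# At the ~0.1% conversion the text cites, even a million-user base
# yields only about a thousand paying customers (figures illustrative).
paying, revenue = freemium_revenue(1_000_000, 0.001, 500.0)
```

This is why the post argues that squeezing the existing free-user base is largely futile: the lever with real range is `users` (growing adoption), not `conversion_rate`.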
    Original URL path: http://carlodaffara.conecta.it/tag/open-source/page/4/index.html (2016-02-18)


