April 2009 Archives « carlodaffara.conecta.it
inside of our economy. There is, however, a point that I would like to make about the distinction between pure OSS and open core licensing; a point that does not imply any kind of ethical or purity measure, but is just a consideration on economics. When we consider what OSS is and what advantage it brings to the market, it is important to consider that a commercial OSS transaction usually has two concrete partners: the seller (the OSS vendor) and the buyer, that is, the user. If we look at the OSS world, we can see that in both the pure and the open core model the vendor has the added R&D-sharing cost reduction that, as I wrote about in the past, can provide significant advantages. But R&D is not the only advantage: the reality is that pure OSS has a great added advantage for the adopter, namely the greatly reduced cost and effort of procurement. With OSS, the adopter can scale a single installation company-wide without a single call to the legal or procurement departments, and it can ask for support from the OSS vendor if needed, even after the roll-out has been performed. With open core, the adopter is not allowed to do the same thing, as the proprietary extensions are not under the same license as the open source part; so, if you want to extend your software to more servers, you are forced to ask the vendor, exactly as with proprietary software systems. This is in fact a much-overlooked advantage of OSS, one that is especially suited to those departmental installations that would probably be prohibited if the legal or acquisition department had to be asked for budget. I believe that this advantage is significant and largely hidden. I started thinking about it while helping a local public administration in the adoption of an OSS-based electronic data capture system for clinical data, and discovered that for many authorities and companies procurement (selecting the product, tendering, tender evaluation, contracting, etc.) can introduce many months of delays and substantially increase costs. For this reason,
we recently introduced with our customers a sort of quick test for OSS "purity": the acquired component is pure OSS if, possibly after an initial payment, the customer is allowed to perform extensions to its adoption of the component, inside and outside of its legal border, without the need for further negotiation with the vendor. The reason for the "possibly after an initial payment" clause is that the vendor may decide to release the source code only to customers (something that is allowed by some licenses), while "inside and outside of its legal border" is a phrase that explicitly includes not only redistribution and usage within a single company, but also redistribution to external parties that may not be part of the same legal entity. This distinction may not be important for small companies, but may be vital, for example, for public authorities that need to redistribute a software solution to a large audience of participating public bodies; a recent example I found is a regional health care authority that is exploring an OSS solution to be distributed to hospital medical practitioners and to private and public structures. Of course, this does not imply that the vendor is forced to offer services in the same way (services and software are, in this sense, quite distinct), or that the adopter should prefer pure OSS over open core; in fact, this is not an expression of preference for one form over the other. We found this simple test to be useful, especially for those new OSS adopters that are not overly interested in the intricacies of open source business models, and it makes for a good initial question to OSS vendors, to understand the implications of acquiring a pure vs. an open core solution.

4 Comments

A brief research summary

Posted by cdaffara in OSS business models, OSS data, blog on April 17th, 2009

After two months and 24 posts, I would like to thank all the kind people that mentioned our FLOSSMETRICS and OpenTTT work, especially Matthew Aslett, Matt Asay, Tarus Balog, Pamela Jones and many others
with whom I had the pleasure to exchange views. I received many invaluable suggestions, and one of the most common was to have a small summary of the posted research as a landing page. So, here is a synthesis of the previous research posts:

- Why use OSS in product development: a set of examples from a thesis by Erkko Anttila, "Open Source Software and Impact on Competitiveness: Case Study", from Helsinki University of Technology, that provided hard data on the different hybrid community/company approaches by Nokia and Apple and the relative gains and advantages.
- The dynamics of OSS adoption (I): an initial view on the different dynamics behind open source adoption, starting with diffusion processes. Some data was also presented on unconstrained monetization.
- On business models and their relevance: a follow-up post on work by Matthew Aslett, introducing my view that future OSS business models will see more industry consortia and specialists, as more and more groups start to take advantage of the collaborative model and will need more coordination on how to contribute back.
- Transparency and dependability for external partners: outlining the transparency advantages of most OSS projects, with two examples mentioned (Zimbra and Alfresco), and the added advantage for partners that can synchronize their work with that of the OSS community.
- The dynamics of OSS adoption (II): diffusion processes: a presentation of diffusion processes as one of the models in OSS adoption, and a presentation of the UTAUT model for estimating the degree of acceptance of OSS.
- From theory to practice: the personal desktop linux experiment: a long example of how to apply the previously discussed models in a theoretical exercise, creating an end-user, large-scale linux PC for personal activities. The post was inspired by work done during the Manila workshop, along with the UN's International Open Source Network, for facilitating take-up of open source by South-East Asian SMEs.
- Rethinking OSS business model classifications by adding
adopters' value: a presentation of the new classification of OSS business models. I have to thank Matthew Aslett of the 451 Group for the many comments, and for accepting to share his work from the CAOS report with us.
- Comparing companies' effectiveness: a response to Savio Rodrigues: a post written in response to work by Savio Rodrigues on the relative shares of R&D of OSS companies compared to traditional IT companies.
- Our definitions of OSS-based business models: a follow-up to the "rethinking" post; it outlines the new definitions of OSS business models created for the final part of the FLOSSMETRICS project.
- Another take on the financial value of open source: our estimates of the value of the open source software market, and a call for further research on non-code contributions.
- OSS-based business models: a revised study based on 218 companies: a post providing the summary of the extended FLOSSMETRICS study on open source companies, which increased their number from 80 to 218, with some observations on relative size and usage of the various models.
- Estimating savings from OSS code reuse, or: where does the money come from?: one of my favourite posts; it provides a long discussion of the savings obtained when using OSS inside of other products, with some additional data obtained through COCOMO modeling.
- Another data point on OSS efficiency: a short post focusing on data from the Italian TEDIS research, which showed how OSS companies are on average more capable of taking on larger customers when compared with benchmark IT companies of the same size.
- The new FLOSSMETRICS project liveliness parameters: fresh from the other project researchers, I provided a list of the new project liveness parameters that will be used in the SME guide.
- Reliability of open source from a software engineering point of view: a post that presents some results on how open source tends to be of higher quality under specific circumstances, and a follow-up idea on how this may be due to basic software engineering facts related to
component reuse.
- Open source and certified systems: a post inspired by a recent white paper on e-voting; the post presents my views on high-integrity and life-critical open source systems.

2 Comments

Open source and certified systems

Posted by cdaffara in OSS business models, OSS data on April 16th, 2009

A recent white paper published by the Election Technology Council (an industry trade association representing providers for over 90% of the voting systems used in the United States) analyses the potential role of open source software in voting systems, and concludes that it is "premature": "Given the economic dynamics of the marketplace, state and federal governments should not adopt unfair competitive practices which show preferential treatment towards open source platforms over proprietary ones. Legislators who adopt policies that require open source products or offer incentives to open source providers will likely fall victim to a perception of instituting unfair market practices." (Where have I heard this before? Curious, sometimes, the deja-vu feeling.) The white paper, however, does contain some concepts that I have found over and over: the result of mixing the legal perspective of OSS (the license under which the software is released) with the technical aspects (the collaborative development model), arriving at some false conclusions that are unfortunately shared by many others. For this reason, I would like to add my perspective on the issue of certified source code and OSS. First of all, there is no causal relation between the license aspect and the quality of the code, or its certifiability. It is highly ironic that the e-voting companies are complaining that OSS may potentially not be tested enough for critical environments like voting, given the results of some testing on their own software systems: "the implementation of cryptographic protection is flawed ... this key is hard-coded into the source code for the AV-TSx, which is poor security practice because, among other things, it means the same
key is used in every such machine in the U.S. and can be found through Google. The result is that in any jurisdiction that uses the default keys rather than creating new ones, the digital signatures provide no protection at all ... No use of high assurance development methods: the AccuBasic interpreter does not appear to have been written using high assurance development methodologies. It seems to have been written according to ordinary commercial practices ... Clearly, there are serious security flaws in the current state of the AV-OS and AV-TSx software" (source: "Security Analysis of the Diebold AccuBasic Interpreter", Wagner, Jefferson, Bishop). Of course, there are many other reports and news pieces on the general unreliability of the certified GEMS software, just to pick the most talked-about component. The fact is that assurance and certification is a non-functional aspect that is unrelated to the license the software is released under, as certifications of software quality and adherence to high-integrity standards are based on design documents, adherence to development standards, testing procedures and much more — but not licensing. I have already written about our research on open source quality from the software engineering point of view, and in general it can be observed that open source development models tend to show a higher improvement in quality within a specific time frame when compared to proprietary software systems, under specific circumstances (like a healthy contributor community). It is possible to certify open source systems under the strictest certification rules, like the SABI "secret and below" certification, medical (CCHIT), encryption (FIPS standards, Common Criteria Evaluation Assurance Level EAL4, in one case meeting or exceeding EAL5), civil engineering (where the product is used for the stability computations of EDF nuclear plant designs), avionics, and ground-based high-integrity systems like air traffic control and railway systems; we explored the procedures for achieving certified
status for pre-existing open source code in the CALIBRE project. Thus, it is possible to meet and exceed the regulatory rules for a wide spectrum of environments with far more stringent specifications than the current e-voting environment. It seems that the real problem lies in the potential for competition from OSS voting systems over proprietary ones: "Legislators who adopt policies that require open source products or offer incentives to open source providers will likely fall victim to a perception of instituting unfair market practices. At worst, policy makers may find themselves encouraging the use of products that do not exist and market conditions that cannot support competition." The reality is that there is some open source voting software (the white paper even lists some), and the real threat is the government starting to fund those projects instead of buying proprietary combinations. This is where the vendors clearly show the underlying misunderstanding of how open source works: you can still sell your assembly of hardware and software (as with EAL, it is the combination of both that is certified, not the software in isolation) and continue the current business model. It is doubtful that "the open source community", as mentioned in the paper, will ever certify the code, as it is a costly and substantial effort — exactly as no individual applied for EAL4 certification for Linux, which requires a substantial amount of money. The various vendors would probably do better if they started a collaborative effort on a minimum-denominator system to be used as a basis for their systems, in a way similar to that performed by mobile phone companies in the LiMo and Android projects, or through industry consortia like Eclipse. They could still introduce differentiating aspects in the hardware and upper-layer software, while reducing the costs of R&D and improving the transparency of a critical component of our modern democracies.

No Comments

MXM: patents and licenses, clarity is all it
takes

Posted by cdaffara in OSS business models, OSS data, blog on April 10th, 2009

Recently on the OSI mailing list, Carlo Piana submitted a proposed license for the reference implementation of the ISO/IEC 23006 MPEG eXtensible Middleware (MXM). The license is derived from the MPL, with the removal of some of the patent conditions from the text of the original license, and clearly creates a legal boundary condition that grants patent rights only to those who compile it solely for internal purposes, without direct commercial exploitation. I tend to agree with Carlo's comment: "My final conclusion is that if the BSD family is considered compliant, so shall be the MXM, as it does not condition the copyright grant to the obtaining of the patents, just as the BSD licenses don't deal with them. And insofar as an implementer is confident that the part of the code it uses is free from the patented area, or it decided to later challenge the patent in case an infringement litigation is threatened, the license works just fine." (As a side note: I am completely and totally against software patents, and I am confident that Carlo Piana is absolutely against them as well.) Having worked in the Italian ISO JTC1 chapter, I also totally agree with one point: "the sad truth is that if we did not offer a patent agnostic license we would have made all efforts to have an open source reference implementation moot". Unfortunately, ISO still believes that patents are necessary to convince companies to participate in standards groups, despite the existence of standards groups that work very well without this policy (my belief is that the added value of standardization, in terms of cost reductions, is well worth the cost of participating in the creation of complex standards like MPEG — but this is for another post). What I would like to make clear is that the real point is not whether the proposed MXM license is OSI-compliant or not; the important point is why you want it to be open source. Let's consider the various
alternatives.
- The group believes that an open source implementation may receive external effort, much like traditional open source projects, and thus reduce maintenance and extension effort. If this is the aim, the probability of getting this kind of external support is quite low: companies would avoid it, as the license would not in any case allow commercial use with an associated patent license, and researchers working in the area would have been perfectly satisfied with any kind of academic or research-only license.
- The group wants to increase the adoption of the standard, and the reference implementation should be used as a basis for further work, to be turned into a commercial product. This falls in the same category as before: why should I look at the reference implementation if it does not grant me any potential use? The group could have simply published the source code for the reference and said: "if you want to use it, you should pay us a license for the embedded patents."
- The group wants a "golden standard" against which to benchmark external implementations, for example to check that the bitstreams are compliant. Again, there is no need for an open source license.
The reality is that there is no clear motivation for releasing this under an open source license, because the clear presence of patents on the implementation makes it risky, or non-free, to use for any commercial exploitation. Microsoft, for example, did it much better: to avoid losing their rights to enforce their patents, they paid or supported other companies to create patent-covered software and release it under an open source license. Since the secondary companies do not hold any patents, by releasing the code they are not relieving any threat from the original Microsoft IPR, and at the same time they use a perfectly acceptable, OSI-approved license. As the purpose of the group is twofold (increase adoption of the standard; make commercial users pay for the IPR licensing), I would propose a different
alternative: since the real purpose is to get paid for the patents, or to be able to enforce them against commercial competitors, why not dual-license it with the strongest copyleft license available at the moment, the AGPL? This way, any competitor would be forced either to be fully AGPL (and so any improvement would have to be shared, exchanging the lost licensing revenue for the maintenance cost reduction), or to pay for the license, turning everything into the traditional IPR licensing scheme. I know, I know — this is wishful thinking. Carlo, I understand your difficult role.

2 Comments

Another hypocrite post: "Open Source After Jacobsen v. Katzer"

Posted by cdaffara in OSS business models, OSS data, divertissements on April 8th, 2009

The reality is that I am unable to resist. Seeing a post containing idiotic comments on open source, masqueraded as a serious article, makes me start giggling with "I have to write them something"; my coworkers are used to it, and they sometimes comment with "another post is arriving", or something more humorous. The post of today is a nicely written essay by Jonathan Moskin, Howard Wettan and Adam Turkel on Law.com, with the title "Open Source After Jacobsen v. Katzer", referring to a recent US Federal Circuit decision. The main point of the ruling is "the Federal Circuit's recognition that the terms in an open source license can create enforceable conditions to use of copyrighted materials"; that is, the fact that software licenses (in this case the Artistic License) that limit redistribution are enforceable. Not only this, but the enforceability is also transferable, because "Jacobsen confirmed that a licensee can be liable for copyright infringement for violating the conditions of an open source license ... the original copyright owner may now have standing to sue all downstream licensees for copyright infringement even absent direct contractual privity". This is the starting point for a funny tirade, from "Before Jacobsen v. Katzer, commercial software developers often
avoided incorporating open source components in their offerings for fear of being stripped of ownership rights. Following Jacobsen, commercial software developers should be even more cautious" (the article headline on the Law.com front page), to "It is perhaps also the most feared for its requirement that any source code compiled with any GPL-licensed source code be publicly disclosed upon distribution (often referred to as infection)" (emphasis mine). Infection! And the closing points: "Before Jacobsen v. Katzer, commercial software developers already often avoided incorporating open source components in their offerings for fear of being stripped of ownership rights. While software development benefits from peer review and transparency of process facilitated by open source, the resulting licenses by their terms could require those using any open source code to disclose all associated source code and distribute incorporated works royalty-free. Following Jacobsen v. Katzer, commercial software developers should be even more cautious of incorporating any open source code in their offerings. Potentially far greater monetary remedies, not to mention continued availability of equitable relief, make this vehicle one train to board with caution." Let's skip the fact that the law practitioners that wrote this jewel of law journalism are part of the firm (White & Case) that represented Microsoft in the EU Commission's first antitrust action; let's skip the fact that terms like "infection", and the liberal use of "commercial", hide the same error already presented in other pearls of legal wisdom already debated here. The reality is that the entire frame of reference is based on an assumption that I first heard from a lawyer working for a quite large firm: that since open source software is free, companies are entitled to do whatever they want with it. Of course it's a simplification; I know many lawyers and paralegals that are incredibly smart (Carlo Piana comes to mind), but to these people I propose the
following gedankenexperiment: imagine that, within the text of the linked article, every mention of "open source" was magically replaced with "proprietary source code". The Federal Circuit ruling would more or less stay unmodified, but the comments of the writers would assume quite hysterical properties. Because, they would argue, proprietary software is extremely dangerous: if Microsoft (just as an example) found parts of its source code included inside another product, they would sue the hell out of the poor developer, who would be unable to use the "Cisco defence" — claiming that Open Source crept into its products and thus damages should be minimal. The reality is that the entire article is written with a focus that is non-differentiating: in this sense, there is no difference between OSS and proprietary code. Exactly as with proprietary software, taking open source code without respecting the license is not allowed (the RIAA would say that it is stealing, and that the company is a pirate). So, dear customers of White & Case: stay away from open source at all costs — while we will continue to reap its benefits.

5 Comments

See you in Brussels: the European OpenClinica meeting

Posted by cdaffara in OSS business models, OSS data, blog on April 8th, 2009

In a few days (the 14th of April) I will be attending, as a panelist, the first European OpenClinica meeting, in the regulatory considerations panel. It will be a wonderful opportunity to meet all the other OpenClinica users and developers, and in general to talk and share experiences. As I will stay there for the evening, I would love to invite all friends and open source enthusiasts that happen to be in Brussels that night for a chat and a Belgian beer. For those that are not aware of OpenClinica: it is a shining example of open source software for health care; a Java-based server system that allows the creation of secure web forms for clinical data acquisition, and much more. The OpenClinica software platform supports clinical data submission,
validation and annotation, data filtering and extraction, study auditing, de-identification of Protected Health Information (PHI), and much more. It is distributed under the LGPL, and has some really nice features, like the design of forms using spreadsheets (extremely intuitive). We have used it in several regional and national trials, and have even trialed it as a mobile data acquisition platform. If you can't be in Brussels but are interested in open source health care, check out OpenClinica.

2 Comments

Reliability of open source from a software engineering point of view

Posted by cdaffara in OSS business models, OSS data on April 6th, 2009

At the Philly ETE conference, Michael Tiemann presented some interesting facts about open source quality, and in particular mentioned that open source software has an average defect density that is 50 to 150 times lower than proprietary software. As it stands, this statement is somewhat incorrect, and I would like to provide a small clarification of the context and the real values. First of all, the average mentioned by Michael is related to a small number of projects: in particular the Linux kernel, the Apache web server (and later the entire LAMP stack), and a small number of additional famous projects. For all of these projects, the reality is that the defect density is substantially lower than that of comparable proprietary products. A very good article on this is Succi, Paulson, Eberlein, "An Empirical Study of Open-Source and Closed-Source Software Products", IEEE Transactions on Software Engineering, Vol. 30, No. 4, April 2004, where the study was performed. It was not the only study on the subject, but all pointed at more or less the same results. Besides the software engineering community, some companies working in the code defect identification industry also published results, like Reasoning Inc.'s "A Quantitative Analysis of TCP/IP Implementations in Commercial Software and in the Linux Kernel" and "How Open Source and Commercial Software Compare:
Database Implementations in Commercial Software and in MySQL". All results confirm the much higher quality, in terms of defects per line of code, found by the academic research. Additional research identified a common pattern: the initial quality of the source code is roughly the same for proprietary and open source software, but the defect density decreases much faster with open source. So it's not that OSS coders are on average code wonders, but that the process itself creates, on average, more opportunities for defect resolution. As Succi et al. pointed out: "In terms of defects, our analysis finds that the changing rate, or the functions modified as a percentage of the total functions, is higher in open source projects than in closed source projects. This supports the hypothesis that defects may be found and fixed more quickly in open source projects than in closed source projects, and may be an added benefit for using the open source development model" (emphasis mine). I have a personal opinion on why this happens, and it is really related to two different phenomena. The first aspect is code reuse: the general modularity and great reuse of components helps developers because, instead of recoding something (and introducing new bugs), the reuse of an already-debugged component reduces the overall defect density. This aspect was also found by other research groups focusing on reuse; for example, in a work by Mohagheghi, Conradi, Killi and Schwarz called "An Empirical Study of Software Reuse vs. Defect-Density and Stability" (available here), we can find that reuse introduces a similar degree of improvement in the bug density and trouble report numbers of code. As can be observed from the graph, code originating from reuse has a significantly higher quality compared to traditional code, and the gap between the two grows with size, as expected from basic probabilistic models of defect generation and discovery. The second aspect is that the fact that bug data is public allows a
prioritization and a better coordination of developers on triaging and, in general, fixing things. This explains why the faster improvement appears not only in code that is reused, but in newly generated code as well; the sum of the two effects explains the incredible difference in quality (50 to 150 times higher than any previous effort, like formal methods, automated code generation and so on). And this quality differential can only grow with time, leading to a long-term push for proprietary vendors to include more and more open source code inside their own products, to reduce the growing effort of bug isolation and fixing.

7 Comments

Dissecting words for fun and profit, or how to be a few years too late

Posted by cdaffara in OSS business models, OSS data, divertissements on April 3rd, 2009

So, after finishing a substantial part of our work on FLOSSMETRICS yesterday, I believe that I deserve some fun. And I cannot ask for more than a new flame-inducing post from a patent attorney, right here, claiming that open source will destroy the software industry — just waiting to be dissected and evaluated. He may be right, right? Actually, not; but as I have to rest somehow between my research duties with the Commission, I decided to prepare a response; after all, the writer is a fellow EE (electrical engineer), so he will probably enjoy some response to his blog post. Let's start by stating that the idea that OSS will destroy the software industry is not new; after all, it is one of the top 5 myths from Navica, and while no one has tried to say it in front of me, I am sure that it was quite common a few years ago. Along with the idea that software helps terrorists: "Now that foreign intelligence services and terrorists know that we plan to trust Linux to run some of our most advanced defense systems, we must expect them to deploy spies to infiltrate Linux. The risk is particularly acute since many Linux contributors are based in countries from which the U.S. would never purchase commercial defense software. Some
Linux providers even outsource their development to China and Russia" (from Green Hills Software CEO Dan O'Dowd). So, let's read and think about what Gene Quinn writes: "It is difficult if not completely impossible to argue the fact that open source software solutions can reduce costs when compared with proprietary software solutions, so I can completely understand why companies and governments who are cash starved would at least consider making a switch, and who can fault them for actually making the switch?" Nice beginning, quite common in debate strategy: first concede something to the opponent, then use the opening to push something unrelated. "The question I have is whether this is in the long term best interest of the computing software industry. What is happening is that open source solutions are forcing down pricing and the race to zero is on." Here we take something that is acknowledged (that OSS solutions are reducing costs, thus creating pressure on pricing) and then attach a second, logically unconnected claim: "the race to zero is on". Who says that a reduction in pricing leads to a reduction to zero? No one with an economics background. The reality is that competition brings down prices — theoretically, in a perfectly competitive environment made of equal products, down to the marginal cost of production. Which is of course not zero, as any software company will happily tell you: the cost of producing copies of software is very small, but the cost of creating, supporting, maintaining and documenting software is not zero. This does not take into account the fact that some software companies enjoy profit margins unheard of elsewhere — which explains why there is such a rush by users to at least experiment with potentially cost-saving measures. "As zero is approached, however, less and less money will be available to be made; proprietary software giants will long since gone belly up, and leading open source companies such as Red Hat will not be able to compete." Of course, since
zero is not approached the phrase is logically useless what is the color of my boat any as you like as I don t own one But let s split it in parts anyway of course if zero is approached software giants will go belly up But why RedHat will not be able to compete Compete with what If all proprietary companies will disappear and only OSS companies remains then the market actually increases even with increasingly small revenues the same effect that can be witnessed in some mobile data markets with the reduction in price of SMS you see an increase in the number of messages sent resulting in an increase in revenues It is quite possible that the open source movement will ultimately result in a collapse of the industry and that would not be a good thing Still following the hypothetical theory that software pricing will go to zero that as I said is not grounded in reality here the author takes the previous considerations and uses a logical trick he says that the proprietary companies will disappear here he says that there will be a collapse of the industry not of the proprietary industry This way he collapses the concept of the software industry that includes the proprietary and the non proprietary actors and conveniently avoids the non proprietary part Of course this is still not grounded in anything logical The conclusion is obvious that would not be a good thing Of course this is another rhetoric form by adding a grounding in something that is emotionally or ethically based we introduce an external negative perception in the reader strengthening what is still an hypothesis And then the avoidance trap I am sure that many open source advocates who are reading this are already irate and perhaps even yelling that this Quinn guy doesn t know what he is talking about I am used to it by now I get it all the time It is after all much easier to simply believe that someone you disagree with is clueless rather than question your own beliefs This approach is so commonly used that is 
now beginning to show its age use the fact that someone may be irate at reading the article to dismiss all critics as clueless people unable to question beliefs The use of this word is another standard tactics simply removing the idea that the personal position of an OSS adopter depends on illogic faith based assumptions this of course would be difficult to defend in an academic environment where we assume that researchers are not faith based in their studies So this is an approach commonly used in online forum blogs and such that are meant for a general audience It is a mistake though to dismiss what I am saying here or any of my other writings on computer software and open source Of course I am dismissing it for the content of what you write not because of my beliefs and I have not read anything else from you so I am not dismissing what I have not read The fact that I am a patent attorney undoubtedly makes many in the open source movement immediately think I simply don t understand technology and my writings that state computer software is not math have only caused mathematicians and computer scientists to believe I am a quack This is totally unrelated to the previous arguments who was talking of software patents anyway We were talking about the role of OSS in terms of competition with the proprietary software market and about potential effects to revenues U nlike most patent

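The SMS analogy above (a lower price per message, more messages sent, higher total revenue) can be made concrete with a tiny sketch. The prices and volumes below are invented for illustration only, not real market data; the point is simply that with sufficiently elastic demand, a price cut raises revenue rather than driving it toward zero.

```python
# Illustrative only: hypothetical unit prices and volumes, not market data.
# With elastic demand, halving the price can more than double the volume,
# so total revenue grows instead of "racing to zero".

def revenue(price_per_unit: float, units_sold: int) -> float:
    """Total revenue is simply price times volume."""
    return price_per_unit * units_sold

before = revenue(0.10, 1_000_000)  # 0.10 per message, 1M messages
after = revenue(0.05, 3_000_000)   # price halved, volume triples

assert after > before  # revenue grows from ~100k to ~150k despite the cut
```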
    Original URL path: http://carlodaffara.conecta.it/2009/04/index.html (2016-02-18)
    Open archived version from archive

  • March 2009 Archives « carlodaffara.conecta.itcarlodaffara.conecta.it
    SLOC        reuse %   cost (K$)   savings %   years   avg. staffing
    100,000     0         1,703       0           1.7     20.5
    100,000     50        975         43          1.3     15.4
    100,000     75        487         71          0.9     8.6
    1,000,000   0         22,000      0           3.3     141.7
    1,000,000   50        12,061      45          2.6     103.2
    1,000,000   75        3,012       86          2       32
    10,000,000  0         295,955     0           7.5     818
    10,000,000  50        160,596     46          5.9     631.2
    10,000,000  75        80,845      73          3.8     421

In the case of 10M lines of code, the saving is estimated at more than 210M$, which is consistent with previous estimates of savings by Nokia in reusing open source within Maemo. Even for the small project of 100,000 lines, the savings are estimated at 1.2M$. Another interesting aspect is related to staffing and time: not only can the use of OSS reduce development time substantially, but it allows for a substantial reduction in the amount of staff necessary for the development. In the smallest example (100,000 lines of code, still substantial), the average staffing is reduced from more than 20 developers to slightly less than 9, bringing this project within reach even of small companies; in my personal view, this explains the exceptional take-up of OSS by new and innovative companies that, even before external sources of capital like VCs, are capable of creating non-trivial projects with very limited resources. 10 Comments

OSS based business models: a revised study based on 218 companies. Posted by cdaffara in OSS business models, OSS data on March 16th, 2009. After the publication of our revised OSS business model taxonomy, we have finally finished the second survey from FLOSSMETRICS, which now covers 218 companies (again excluding companies that have less than 30% of revenues coming from OSS, and excluding companies that sell a proprietary product based on OSS, as there are simply too many of them). Some companies are using more than one model: for example, many dual licensing companies also sell services and support, and thus can be classified also as product specialists. With the actual numbers:

    Model name             companies
    product specialist     131
    open core              52
    indirect               44
    dual licensing         19
    R&D sharing            6
    training               5
    aggregate supp.        5
    legal cert.            5
    platform providers     4
    selection consulting   4

Some important considerations: product specialists are counted only when there is a demonstrable participation of the company in the project as main committer; otherwise, the number of specialists would be much greater, as some projects are the center of commercial support from many companies (a good example is OpenBravo or Zope). The distribution of revenue (approximate, as most companies are not publishing revenue data) seems to match that of the average IT sector, with the vast majority of companies of small size (less than 5M$), around 10% medium sized (5 to 20M$), and very few that can be classified as large. Another observation is the fact that platform providers, while limited in number, tend to have a much larger revenue rate than both specialists and open core companies. Overall, there seems to be no significant difference in revenues when comparing same-class product specialists with open core companies, but this is based on uncertain estimates of relative revenues and should be taken as purely speculative. What seems to be constant is the reported increase in visibility and sales leads experienced by companies that adopted a pure open source model (be it dual licensing, specialists, or based on indirect revenues); as before, it is possible to check this kind of increase only through web-based metrics (which are in many cases unreliable) and by indirect measurements, like user participation in forums or dedicated conferences. 4 Comments

Another take on the financial value of open source. Posted by cdaffara in OSS business models, OSS data on March 13th, 2009. The real value of open source in financial terms has been one of my favourite arguments, and I had the opportunity to do research in this area for a long time, thanks to our work for the customers for which we provide consulting on OSS business models. I recently found a new post on this by Bruno Von Rotz, with some preliminary conclusions like a "total value of USD 100 to 150 billion. I would
assume that some of it comes from enterprises and large organizations, but this is probably max 20-30% of it". I would like to provide, first of all, a validation of the first part, and a comment on the second. As for the software market, Gartner published in 2006 a very well done study on OSS in the context of the overall software market, and among the various results there is one that I found quite interesting. Considering the fact that some parallel data points (like results from the OECD estimates on the software market) more or less confirm the predictions from Gartner, we can say that OSS has a financial value of 120B$ now, and will reach 150B$ in 2010: perfectly in line with the predictions from Bruno. What I am not convinced of is the calculation of the share of voluntary contributions vs. company contributions. If the Gartner data is accurate (and I believe that this is the case), we can expect that companies should contribute between 40% and 50% of the value of a project, and this is somewhat consistent with projects like Linux or Eclipse, where there is a large ecosystem not only of adopters but of commercial companies working on top, and where company contributions are in that range. In this sense, I believe the 20-30% share mentioned by Bruno to be too restrictive; the problem is that measuring code is not the only way to measure contributions. I frequently use this as an example: "In the year 2000, fifty outside contributors to Open Cascade provided various kinds of assistance: transferring software to other systems (IRIX 64 bits, Alpha OSF), correcting defects (memory leaks) and translating the tutorial into Spanish, etc. Currently there are seventy active contributors, and the objective is to reach one hundred. These outside contributions are significant: Open Cascade estimates that they represent about 20% of the value of the software." Do these contributions appear as source code? No, exactly as localization efforts for OpenOffice or KDE do not appear in source code metrics. My belief is that the value of OSS
right now is even much larger than 120B$, and that we have simply no way to measure this hidden value; but it's there. 2 Comments

Our definitions of OSS based business models. Posted by cdaffara in OSS business models, OSS data on March 13th, 2009. Two days ago, Matthew Aslett was so kind to comment on our new research on OSS business models, especially our new taxonomy, based on our latest extensions of the FLOSSMETRICS methodology and the increase in the number of surveyed companies. We both share the interest in having a clear, simple and usable set of definitions, to avoid confusion when referring to specific business models, and he decided to publish the previously private set of definitions that were adopted in the CAOS report "Open source is not a business model". As I would like to find a converging set of definitions, I will publish here a pre-release of our next edition of the OSS guide, which covers in more detail what Lampitt classifies as "vendor licensing strategy". My hope is to see if it is possible to find an agreed-to definition that can be shared by researchers and experts. As already mentioned in my previous post, we introduced several changes compared to the previous edition. It does, for the first time, disaggregate what was originally called ITSC (installation, training, support, consulting), because many successful companies are now specializing in a single activity. This is a significant change from 2006 (when we started collecting data on OSS models), when companies were performing all those activities in a more or less undifferentiated way. We believe that this specialization will continue with the enlargement of the commercial OSS market. We removed the "badgeware" category from the list: we found that some of the vendors that originally followed this model disappeared, and for those remaining, the protection from freeriding and the overall model more or less morphed into open core or split OSS/commercial. As the visibility clause can now be included in the GPLv3, I believe that the
remaining few badgeware licenses will disappear quickly. As for our missing model, "proprietary built on open", we found that basically the majority of products use OSS inside, so we are waiting to finish our second strand of research on how to separate those products that are entirely OSS based from those that merely use OSS; our current idea is related to the substitution principle: is it possible to market the same product when all OSS components are substituted by non-OSS ones? Does it fall in the same market, or does it become something radically different? This is a common theme, which tries to answer questions like "would it have been possible for Google to be based on proprietary software?" Taking this into consideration, here is our new taxonomy:

Dual licensing: the same software code distributed under the GPL and a commercial license. This model is mainly used by producers of developer-oriented tools and software, and works thanks to the strong coupling clause of the GPL, which requires derivative works, or software directly linked to it, to be covered under the same license. Companies not willing to release their own software under the GPL can buy a commercial license that is, in a sense, an exception to the binding clause; by those that value the "free as in speech" idea of free/libre software, this is seen as a good compromise between helping those that abide by the GPL and receive the software for free (and make their own software available as FLOSS), and benefiting, through the commercial license, those that want to keep the code proprietary. The downside of dual licensing is that external contributors must accept the same licensing regime, and this has been shown to reduce the volume of external contributions, which become mainly limited to bug fixes and small additions.

Open core (previously called split OSS/commercial): this model distinguishes between a basic FLOSS software and a commercial version, based on the libre one but with the addition of proprietary plugins. Most companies adopt as license
the Mozilla Public License, as it explicitly allows this form of intermixing, and allows for much greater participation from external contributors, as no acceptance of double licensing is required. The model has the intrinsic downside that the FLOSS product must be valuable to be attractive for the users, but must also be not complete enough to prevent competition with the commercial one. This balance is difficult to achieve and maintain over time; also, if the software is of large interest, developers may try to complete the missing functionality in a purely open source way, thus reducing the attractiveness of the commercial version.

Product specialists: companies that created or maintain a specific software project, and use a pure FLOSS license to distribute it. The main revenues are provided by services like training and consulting (the ITSC class), and follow the original "best code here" and "best knowledge here" of the original EUWG classification. It is based on the assumption, commonly held, that the most knowledgeable experts on a software are those that have developed it, and this way can provide services with a limited marketing effort, by leveraging the free redistribution of the code. The downside of the model is that there is a limited barrier to entry for potential competitors, as the only investment that is needed is the acquisition of specific skills and expertise on the software itself.

Platform providers: companies that provide selection, support, integration and services on a set of projects, collectively forming a tested and verified platform. In this sense, even Linux distributions were classified as platforms; the interesting observation is that those distributions are licensed for a significant part under pure FLOSS licenses, to maximize external contributions, and leverage copyright protection to prevent outright copying, but not cloning (the removal of copyrighted material like logos and trademarks to create a new product). The main value proposition comes in the form of
guaranteed quality, stability and reliability, and the certainty of support for business-critical applications.

Selection/consulting companies: companies in this class are not strictly developers, but provide consulting and selection/evaluation services on a wide range of projects, in a way that is close to the analyst role. These companies tend to have a very limited impact on the FLOSS communities, as the evaluation results and the evaluation process are usually a proprietary asset.

Aggregate support providers: companies that provide one-stop support on several separate OSS products, usually by directly employing developers or forwarding support requests to second-stage product specialists.

Legal certification and consulting: these companies do not provide any specific code activity, but provide support in checking license compliance, sometimes also providing coverage and insurance against legal attacks; some companies employ tools to verify that code is not improperly reused across company boundaries.

Training and documentation: companies that offer courses, online and physical training, additional documentation or manuals. This is usually offered as part of a support contract, but recently several large-scale training center networks started offering OSS-specific courses.

R&D cost sharing: a company or organization may need a new or improved version of a software package, and fund some consultant or software manufacturer to do the work. Later on, the resulting software is redistributed as open source, to take advantage of the large pool of skilled developers who can debug and improve it. A good example is the Maemo platform, used by Nokia in its Mobile Internet Devices (like the N810); within Maemo, only 7.5% of the code is proprietary, with a reduction in costs estimated at 228M$ and a reduction in time to market of one year. Another example is the Eclipse ecosystem, an integrated development environment (IDE) originally open sourced by IBM and later managed by the Eclipse
Foundation. Many companies adopted Eclipse as a basis for their own products, and this way reduced the overall cost of creating a software product that provides, in some way, developer-oriented functionalities. There is a large number of companies, universities and individuals that participate in the Eclipse ecosystem; as an example, as recently measured, IBM contributes around 46% of the project, with individuals accounting for 25%, and a large number of companies (like Oracle, Borland, Actuate and many others) with percentages that go from 1% to 7%. This is similar to the results obtained from analysis of the Linux kernel, and shows that when there is a healthy and large ecosystem, the shared work reduces engineering costs significantly; Ghosh estimates that it is possible to obtain savings in terms of software research and development of 36% through the use of FLOSS. This is in itself the largest actual market for FLOSS, as demonstrated by the fact that the majority of developers are using at least some open source software within their own code.

Indirect revenues: a company may decide to fund open source software projects if those projects can create a significant revenue source for related products, not directly connected with source code or software. One of the most common cases is the writing of software needed to run hardware, for instance operating system drivers for specific hardware. In fact, many hardware manufacturers are already distributing gratis software drivers; some of them are already distributing some of their drivers (especially those for the Linux kernel) as open source software. The "loss leader" is a traditional commercial model, common also outside of the world of software; in this model, effort is invested in an open source project to create or extend another market under different conditions. For example, hardware vendors invest in the development of software drivers for open source operating systems (like Linux) to extend the market for the hardware itself. Other ancillary models are,
for example, those of the Mozilla Foundation, which obtains a non-trivial amount of money from a search engine partnership with Google (an estimated 72M$ in 2006), while SourceForge/OSTG receives the majority of its revenues from e-commerce sales of the affiliated ThinkGeek site. At the moment there is no single prevalent model, with companies more or less adopting and changing models depending on the specific market or the shifting costs. For example, during the last few years a large number of companies shifted from an open core model to a pure product specialist one, to leverage the external community of contributors. Many researchers are trying to identify whether there is a more efficient model among all those surveyed; what we found is that the most probable future outcome will be a continuous shift across models, with a long-term consolidation of development consortia (like Symbian and Eclipse) that provide a strong legal infrastructure and development advantages, and product specialists that provide vertical offerings for specific markets. 5 Comments

Comparing companies' effectiveness: a response to Savio Rodrigues. Posted by cdaffara in OSS business models, OSS data, Uncategorized on March 9th, 2009. I was intrigued by a tweet from Stéfane Fermigier ("Comparing only 1 oss vendor (RHAT) and 1 proprietary, monopolistic one (MSFT) is really a deep piece of economic science"), with a link to this article by long-time OSS debater/supporter/critic fellow Savio Rodrigues, which compares the financial breakdown of RedHat and Microsoft, and concludes that the commonly held hypothesis that open source gives a capital advantage by providing savings on R&D is not true. In particular: "The argument is that commercial vendors spend on items such as advertising, marketing, R&D and, most importantly, expensive direct sales representatives. We're told that open source vendors spend significantly less on these items and hence can be more capital efficient. These costs make up the

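The reuse-savings table near the beginning of this entry can be approximated with a COCOMO-style sketch. The effort and schedule equations below are the classic COCOMO 81 "organic mode" formulas; the cost per person-month and the treatment of reused code as simply subtracted from the new-code count are my own simplifying assumptions for illustration, so the numbers only roughly track the published figures.

```python
# Sketch of how code reuse shrinks cost, schedule and staffing, using the
# COCOMO 81 basic organic-mode equations (Boehm). The K$/person-month rate
# and the "reused code = code not written" simplification are assumptions.
COST_PER_PM = 8.0  # K$ per person-month (assumed, for illustration only)

def estimate(sloc: int, reuse_fraction: float) -> dict:
    kloc = sloc / 1000 * (1 - reuse_fraction)   # only newly written code
    effort_pm = 2.4 * kloc ** 1.05              # effort in person-months
    schedule_m = 2.5 * effort_pm ** 0.38        # calendar months
    return {
        "cost_k": effort_pm * COST_PER_PM,      # total cost in K$
        "years": schedule_m / 12,               # development time
        "staff": effort_pm / schedule_m,        # average team size
    }

base = estimate(100_000, 0.0)     # 100k lines, no reuse
reused = estimate(100_000, 0.75)  # 100k lines, 75% reused from OSS
saving = 1 - reused["cost_k"] / base["cost_k"]
# As in the table, 75% reuse cuts cost by well over half and shrinks
# both the schedule and the average team size.
```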
    Original URL path: http://carlodaffara.conecta.it/2009/03/index.html (2016-02-18)

  • February 2009 Archives « carlodaffara.conecta.itcarlodaffara.conecta.it
    "in January we (the Windows dev team) were receiving one Send Feedback report every 15 seconds for an entire week, and to date we've received well over 500,000 of these reports. Microsoft has fixes in the pipeline for nearly 2,000 bugs in Windows code (not in third-party drivers or applications) that caused crashes or hangs." That's great: Microsoft is getting a lot of feedback about Windows 7. What kind of feedback are testers getting from the team in return? "Very little. I get lots of e-mail from testers asking me whether Microsoft has fixed specific bugs that have been reported on various comment boards and Web sites. I have no idea, and neither do they" (emphasis mine). Open source, if well managed, is radically different. I had a conversation with a customer just a few minutes ago, asking for specifics on a bug encountered in Zimbra, answered simply by forwarding the link to the Zimbra dashboard. Not to be outdone, Alfresco has a similar openness; or one of my favourite examples, OpenBravo. Transparency pays, because it provides a direct handle on development, and provides a feedback channel for the eventual network of partners or consultancies that are living off an open source product. This kind of transparency is becoming more and more important in our IT landscape, because time constraints and visibility of development are becoming even more important than pure monetary considerations, and it allows adopters to eventually plan for alternative solutions, depending on the individual risks and effort estimates. 1 Comment

On business models and their relevance. Posted by cdaffara in OSS business models, OSS data on February 24th, 2009. Matthew Aslett has a fantastic summary post that provides a sort of synthesis of some of the previous debates on what an OSS business model is, and how this model impacts the performance of a company, along with the usual sensible comments. There are a few points that I would like to make. It is probably true that a pure service-based company is less interesting for
VCs looking for an equity investment; by "service based" I mean "Product specialists: companies that created or maintain a specific software project, and use a pure FLOSS license to distribute it. The main revenues are provided by services like training and consulting" (from the FLOSSMETRICS guide). Every service-based model of this kind is limited by the high percentage of non-repeatable work that must be done by humans, so the profit margins are lower than those of the average software industry, or of other OSS models. On the other hand, unconstrained distribution, facilitated by the clear, unambiguous model and single license, in many cases compensates for this lower margin by increasing the effectiveness of marketing messages. Tarus Balog notes: "For those companies trying to make billions of dollars on software quickly, the only way to do that in today's market is with the hybrid model, where much of the revenue comes from closed software licenses." That's right: at the moment this seems the only possible road to a 1B$ company. What I am not convinced of is that this is in itself such a significant goal; after all, the importance of being big is related to the fact that bigger companies have the capability of creating more complex solutions, or of being capable of servicing customers across the globe. But in OSS, complex solutions can be created by engineering several separate components, reducing the need for a larger entity creating things from scratch, and cooperation between companies in different geographical areas may provide a reasonable offering with a much smaller overhead (the bigger the company, the less is spent on real R&D and support). A smaller (but not small) company may still be able to provide excellent quality and stability, with a more efficient process that translates into more value for dollar for the customer. I believe that in the long term the market equilibrium will be based on a set of service-based companies providing high specialization, and development consortia providing
core economies of scale. After all, there is a strong economic incentive to move development outside of companies and reduce coding effort through reuse. Here is an example from the Nokia Maemo platform: in this slide from Erkko Anttila's thesis (more data in this previous post), it is possible to see how development effort and cost were shifted from the beginning of the project to the end. The real value comes from being able to concentrate on differentiating, user-centered applications; those can still be developed in a closed way, if the company believes that this gives them greater value, but the infrastructure and the 80% of non-differentiating software expenditure can be delivered at a much lower price point if developed in a shared way. Development consortia like the Eclipse consortium can act as a liaison/clearing office for external contributions, simplifying the process of contribution from companies. The combination of visibility and clear contribution processes can help companies in the shift from "shy" participants (that prefer to have individual developers commit changes to projects, thus relieving the company from any liability, but still reaping the advantages of participation) to contribution and championing. 3 Comments

The dynamics of OSS adoption (1). Posted by cdaffara in OSS business models, OSS data on February 24th, 2009. There are many different mechanisms behind OSS adoption, and understanding the differences makes it easier to help companies in using them efficiently; after all, word of mouth may be sufficient to get visibility, but it may not be enough to guarantee adoption, and then to convert this adoption into paid services. In fact, monetization may require a large number of adopters to get a small percentage of paid users; in many domains, only 0.05% of adopters pays for services, a percentage that we call "unconstrained monetization percentage" (or UMP, to make it sound more academic). While it is true that the incremental

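The "unconstrained monetization percentage" quoted above (about 0.05% of adopters paying for services) implies very large adopter bases for even modest revenue targets. A back-of-the-envelope sketch, where the average yearly contract value is an arbitrary assumption of mine:

```python
# How many adopters does a pure-services OSS vendor need, given the ~0.05%
# "unconstrained monetization percentage" (UMP) mentioned in the post?
# The average contract value below is an assumed figure, not from the post.
UMP = 0.0005                  # 0.05% of adopters become paying customers
AVG_CONTRACT = 10_000         # assumed yearly revenue per paying customer ($)

def adopters_needed(target_revenue: float) -> float:
    """Adopter base required to sustain a given yearly services revenue."""
    paying_customers = target_revenue / AVG_CONTRACT
    return paying_customers / UMP

# A 5M$/year services business would need on the order of a million adopters.
print(f"{adopters_needed(5_000_000):,.0f}")  # 1,000,000
```

This is why, as the post argues, visibility alone is not enough: converting a large adopter base into even a thin stream of paid services requires very wide distribution first.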
    Original URL path: http://carlodaffara.conecta.it/2009/02/index.html (2016-02-18)

  • Comments for carlodaffara.conecta.it
    Comment on "Estimating savings from OSS code reuse, or where does the money comes from" by "the lower bound of savings that OSS does bring to Europe is 116B" (carlodaffara.conecta.it), Mon, 23 Jul 2012 10:03:35 +0000 (#comment-115649): "…quality when compared with equivalent proprietary code (data and academic references available here), but I will leave this kind of evaluation for a future article. We can however say with quite a…"

A second pingback from the same post, Mon, 23 Jul 2012 09:06:23 +0000 (#comment-115635): "…these costs, thanks to the effort of the software engineering community; some details can be found here, for those that really, really want to be put to…"

Comment on "DoD OSCMIS: a great beginning of a new OSS project" by Delmar Kerr, Tue, 26 Jun 2012 16:28:49 +0000 (#comment-109454), quoting Kris Buytaert: "From their site: 'To request a copy of the Open Source Corporate Management Information System suite under the terms of the Open Software License v 3.0, please click here to download a copy of the OSCMIS OSL v 3.0 license, and then forward a signed copy of the license to OSSI, either via email, fax or postal mail (appropriate addresses below).' Seems like they have a very, very, very long way to go."

Comment on "EveryDesk is a finalist of the OpenWorldForum demo cup" by "The evolution of CloudWeavers, from desktops to clouds" (www.cloudweavers.it): http://carlodaffara.conecta.it/everydesk-is-a-finalist

    Original URL path: http://carlodaffara.conecta.it/comments/feed/index.html (2016-02-18)

  • cdaffara « carlodaffara.conecta.itcarlodaffara.conecta.it
    …in blog, divertissements. 4 Comments
"The neverending quest to prove Google evilness. Why?" Tuesday, March 22nd, 2011. Tags: android, droid, fosspatents, gpl, gpl laundering, open source. Posted in blog, divertissements. 1 Comment
"App stores have no place in a web apps world" Monday, February 14th, 2011. Tags: app stores, open source, web apps. Posted in divertissements. 6 Comments
"On WebM again: freedom, quality, patents" Wednesday, January 19th, 2011. Posted in Uncategorized. 40 Comments
"ChromeOS is not for consumers" Monday, December 13th, 2010. Posted in Uncategorized. 6 Comments
"OSS is about access to the code" Sunday, December 12th, 2010. Tags: open source. Posted in divertissements. 5 Comments
"No, Microsoft, you still don't get it" Monday, December 6th, 2010. Posted in Uncategorized. 6 Comments
"How to make yourself hated by academics" Friday, November 5th, 2010. Tags: open source, OSS adoption, OSS business models. Posted in divertissements. 3 Comments
Older Entries / Newer Entries

    Original URL path: http://carlodaffara.conecta.it/author/admin/page/2/index.html (2016-02-18)
    Open archived version from archive

  • Nokia is one of the most active Android contributors, and other surprises « carlodaffara.conecta.it
    …S60 platform for mobile devices. The S60 port exists in a branch of the public WebKit repository, along with various changes to better support mobile devices. To date it has not been merged to the mainline; however, a few changes did make it in, including support for CSS queries. In 2008 Nokia acquired Trolltech. Trolltech has an extensive history of WebKit contributions, most notably the Qt port.
    Google: Google employees have contributed code to WebKit as part of work on Chrome and Android, both originally secret projects. This has included work on portability, bug fixes, security improvements and various other contributions.
    Torch Mobile: Torch Mobile uses WebKit in the Iris Browser and has contributed significantly to WebKit along the way. This has included portability work, bug fixes and improvements to better support mobile devices. Torch Mobile has ported WebKit to Windows CE/Mobile and other undisclosed platforms, and maintains the QtWebKit git repository. Several long-time KHTML and WebKit contributors are employed by Torch Mobile.
    Nuanti: Nuanti engineers contribute to WebCore, JavaScriptCore and in particular develop the WebKit GTK port. This work includes porting to new mobile and embedded platforms, addition of features, and integration with mobile and desktop technologies in the GNOME stack. Nuanti believes that working within the framework of the webkit.org version control and bug tracking services is the best way of moving the project forward as a whole.
    Igalia: Igalia is a free software consultancy company employing several core developers of the GTK port, with contributions including bugfixing, performance, accessibility, API design and many major features. It also provides various parts of the infrastructure needed for its day-to-day functioning, and is involved in the spread of WebKit among its clients and in the GNOME ecosystem, for example leading the transition of the Epiphany web browser to WebKit.
    Company 100: Company 100 has contributed code to WebKit as part of work on the Dorothy Browser since 2009. This work includes portability, performance, bug fixes, and improvements to support mobile and embedded devices. Company 100 has ported WebKit to BREW MP and other mobile platforms.
    University of Szeged: The Department of Software Engineering at the University of Szeged, Hungary, started to work on WebKit in mid-2008. The first major contribution was the ARMv5 port of the JavaScript JIT engine. Since then, several other areas of WebKit have been tackled: memory allocation, parsers, regular expressions, SVG. Currently the Department is maintaining the official Qt build bots and the Qt early warning system.
    Samsung: Samsung has contributed code to WebKit EFL (Enlightenment Foundation Libraries), especially in the area of bug fixes, HTML5, the EFL WebView, etc. Samsung is maintaining the official EFL build bots and the EFL early warning system.
    So we see fierce competitors (Apple, Nokia, Google, Samsung) co-operating in a project that is clearly of interest to all of them. In a previous post I made a similar analysis for IGEL (popular developers of thin clients) and HP/Palm. The actual results are: Total published source code

    Original URL path: http://carlodaffara.conecta.it/nokia-is-one-of-the-most-active-android-contributors-and-other-surprises/index.html (2016-02-18)
    Open archived version from archive

  • WebP is an effective low-bitrate encoder, better than jpeg « carlodaffara.conecta.it
    …see if the technology works as described. The process I used is simple: I took some photos (I know, I am not a photographer), selected for a mix of detail and low-gradient areas, compressed them to quality 5 using GIMP with all Jpeg optimization enabled, took notice of the size, then encoded the same source image with the WebP cwebp encoder, without any parameter twiddling, using the size command line option to match the size of the compressed Jpeg file. The WebP image was then decoded as PNG. The full set was uploaded to Flickr here, and here are some of the results. Photo: Congress Centre, Berlin (top: original Jpeg; middle: quality-5 Jpeg; bottom: WebP at the same Jpeg size). Photo: LinuxNacht Berlin (top: original Jpeg; middle: quality-5 Jpeg; bottom: WebP at the same Jpeg size). Salzburg castle (top: original Jpeg; middle: quality-5 Jpeg; bottom: WebP at the same Jpeg size). Venice (top: original Jpeg; middle: quality-5 Jpeg; bottom: WebP at the same Jpeg size). There is an obvious conclusion: at small file sizes, WebP handily beats Jpeg — even a good Jpeg encoder (the libjpeg-based one used by GIMP) — by a large margin. Using a Jpeg recompressor and repacker it is possible to even the results a little bit, but only marginally. With some test materials, like cartoons and anime, the advantage increases substantially. I can safely say that, given these results, WebP is a quite effective low-bitrate encoder, with substantial size advantages over Jpeg. Tags: image quality, jpeg, webm, webp. This entry was posted on Thursday, April 21st, 2011 at 11:18 am and is filed under blog, divertissements. You can follow any responses to this entry through RSS 2.0. You can leave a response, or trackback from your own site. Comments (1), Trackbacks (0). 1. by kidjan, July 16th, 2011 at 22:47
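    The size-matched comparison described in the excerpt can be sketched as a small shell script. This is a sketch under stated assumptions, not the author's exact commands: the post used GIMP for the Jpeg step, while here libjpeg's cjpeg stands in for it; libwebp's cwebp/dwebp are assumed to be on the PATH; and a tiny synthetic PPM file replaces the real photos.

```shell
#!/bin/sh
# Sketch of the equal-size WebP-vs-JPEG comparison described above.
# Assumptions: cjpeg (libjpeg) and cwebp/dwebp (libwebp) are installed;
# photo.ppm is a synthetic stand-in for a real photograph.
set -e

# Skip gracefully on systems without the codec tools installed.
for t in cjpeg cwebp dwebp; do
  command -v "$t" >/dev/null 2>&1 || { echo "$t not found, skipping"; exit 0; }
done

# Synthesize a tiny 8x8 binary PPM "photo" (P6 header + 192 RGB bytes).
printf 'P6\n8 8\n255\n' > photo.ppm
head -c 192 /dev/urandom >> photo.ppm

# 1. Heavily compressed JPEG baseline (quality 5, optimized Huffman tables).
cjpeg -quality 5 -optimize photo.ppm > photo_q5.jpg

# 2. Note the JPEG size in bytes...
SIZE=$(wc -c < photo_q5.jpg)

# 3. ...and encode the ORIGINAL source to WebP, targeting the same byte
#    count via cwebp's -size option, with no other parameter twiddling.
cwebp -quiet -size "$SIZE" photo.ppm -o photo.webp

# 4. Decode the WebP result to PNG for side-by-side inspection.
dwebp photo.webp -o photo.png

echo "JPEG: $SIZE bytes, WebP: $(wc -c < photo.webp) bytes"
```

    For real photos only the input file changes; the essential step is feeding the JPEG byte count to cwebp's -size target, so both encoders are compared at the same bitrate rather than at the same nominal quality setting.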

    Original URL path: http://carlodaffara.conecta.it/a-small-webp-test/index.html (2016-02-18)
    Open archived version from archive

  • image quality « carlodaffara.conecta.it
    …of my trusted Nokia N97 and tried to convert them in a sensible way. Before flaming me about the fact that the images were not in raw format: I know it, thank you. My objective is not to perform a perfect test, but to verify Google's assumption that WebP can be used to reduce the bandwidth consumed by traditional, already-encoded images while preserving most of the visual quality. This is not a quality comparison, but a field test to see if the technology works as described.

    Original URL path: http://carlodaffara.conecta.it/tag/image-quality/index.html (2016-02-18)
    Open archived version from archive