web-archive-it.com » IT » C » CONECTA.IT

  • 04 « July « 2011 « carlodaffara.conecta.it
    …a differentiating element anymore. There go the "built on open source components" claims of some companies: practically all companies are using open source inside, so it is simply not a difference. So let's start with the real differentials between OSS and proprietary software. The licensing: an open license may introduce a difference for the adopter. This means that if such a differential is used by the company, it must provide a value that derives from the intrinsic properties of open source as a legal framework: for example, independence from the supplier (at least theoretically), both in case of a provider change and in terms of adding or integrating additional components, even if the original company disagrees. The development model: the collaborative development model is not a certainty; it arises only when there is a clear infrastructure for participation. When it does happen, it is comparatively much faster and more efficient than the proprietary, closed model. For this to be a real differentiator, the company must engage in an open development model, and this is actually happening only in a very small number of cases. In general, the majority of the companies that we surveyed in FLOSSMETRICS now have a limited degree of differentiation when compared to their peers, and even as a signal, "open source" is no more interesting than other IT terms that have entered the mainstream (we can discuss whether "cloud" will fade into the background as well). Of the companies we surveyed, I would say that those we originally marked as "specialists" are the most apt to still use open source as a differentiating term, and the "open core" ones the least, since they neither reap the advantages of a distributed development model, nor does the adopter reap the advantages of the open source license…

    Original URL path: http://carlodaffara.conecta.it/2011/07/04/index.html (2016-02-18)
    Open archived version from archive

  • 12 « July « 2011 « carlodaffara.conecta.it
    …private tree (at the moment, the 3.x branch) that is also based on open source projects, but is for now internal to Google and the partners that decided to join its Open Handset Alliance. Some projects are clearly behind their open counterparts (especially WebKit), while at the same time maintaining substantial external, off-tree patches that are extremely difficult to integrate, as in the Linux kernel. There is an additional layer of totally proprietary apps that are installable only for those partners and phones that subject themselves to an additional set of requirements and licensing rules, something that, for example, caused lots of problems for Motorola and Google in the SkyHook lawsuit. This will probably continue, and will be the real battleground, given the fact that Android and iOS are clearly emerging as the leading platforms. Could it have been different? I think so, and by releasing some degree of control Google could have created a much safer and more open environment while sacrificing very little. Let's start with the point that what Google really, really wants is an unencumbered, rich, internet-enabled platform that is not constrained by third parties and that uses Google services. The reality is that Google lives off advertising (at the moment, at least) and is trying to expand to other services, like enterprise mail, documents and so on. There are two roads to do that. The first is to create a platform that is strongly tied to Google services, so that it is nearly impossible to escape its control. In doing so, however, you face the tension of the OEMs and vendors that may want to switch services, or that want to push internally developed offerings; in this case they will have nothing left to do but go with the competition or create their own variant, something that has already happened, increasing adoption costs. The alternative is the "a rising tide lifts all boats" approach: make it a purely open source project with real distributed control, like Eclipse. Turn it over to the Apache foundation; make all the interested partners part of the foundation or consortium; make the code public, with limited exceptions (like pre-release hardware drivers), and try to track as much as possible the projects you take things from. Apply a strict code-contribution regime to strengthen your position against IP claims, and, especially, don't turn it into a product. Yes, you read it properly: the code should be a strict open source project. This way it would be extremely difficult for an external party to sue the origin, given the difficulty of identifying a directly derived, infringing device. Google could then, using this more sanitized and cleaner base, have provided insurance through a third party for IP infringement claims, if a code-base adopter wanted to use such an opportunity (some may decide to fight on their own, of course). This implies that an Android OEM can substitute the Google…

    Original URL path: http://carlodaffara.conecta.it/2011/07/12/index.html (2016-02-18)

  • 22 « July « 2011 « carlodaffara.conecta.it
    …underwrite the production of the code. "But dentistry, like most things in western society, tends to be a for-profit, competitive enterprise. If everyone gets the benefit of the software (since it's FOSS) but a smaller group pays for it, the rest of the dentists get a competitive advantage. So there is no incentive for a subset of the group to fund the effort." There goes the main argument: since the software is given away for free, and someone else gets its advantages for free, free riding will quickly destroy any incentive. This is based on many wrong assumptions, the first of which is that the market is always capable of providing a good product that matches the needs of its users. This is easily found to be wrong, as the examples of SAKAI and Kuali demonstrate: both products were developed because the proprietary tools used by the initial group of universities were unable to meet the requirements, and the costs were so high that developing by reusing open source was a better alternative. And consider that Kuali is exactly the kind of software that Turner identifies as "non-sexy", that is, a financial management system; if you want more examples of the medical kind, look at VISTA, OpenClinica, or (to meet Turner's article) OpenDental. The reality is that some kinds of software are essential, most software is non-differentiating, and maintaining that software as open source has the potential to reduce costs and investment substantially (for the actual data, check this post). Another variant is to propose that the software will be developed and given away, and the developers will make their living by charging for support. "Leaving alone the cynical idea that this would be a powerful incentive to write hard-to-use software, it also suffers from a couple of major problems. To begin with, software this complex might take a team of 10 people one or more years to produce. Unless they are independently wealthy, or already have a pipeline of supported projects, there's no way they will be able to pay for food and college while they create the initial product." Great! Turner has now discovered one of the possible open-source-based business models we classified in FLOSSMETRICS. Of course, he conveniently forgot that by reusing open source this hapless group of starved developers can create their product for one tenth of the cost and time. So the same property that Turner thinks can doom this poor group of misguided developers can actually provide them with the solution as well. "And once they do, the source is free and available to everyone, including people who live in areas of the world with much lower costs and standards of living. What is going to stop someone in the developing world from stepping in and undercutting support prices? It strikes me as an almost automatic race to the bottom." That's called competition; never heard of it? Despite the race to the bottom that…

    Original URL path: http://carlodaffara.conecta.it/2011/07/22/index.html (2016-02-18)

  • 05 « May « 2011 « carlodaffara.conecta.it
    …on the hardware they already have. The economics here is related to the reduction in management costs and the opportunity to simplify code deployment. The telcos: my own sources indicate that lots of telcos are working feverishly to create their own cloud offerings, now that there is a potential solution that has no license costs. They could have built it earlier with VMware or Citrix, but the licensing alone would have placed them out of the market; with the new open source stacks, and a relatively limited investment, it is possible to reuse the massive hardware cache already in house to consolidate and offer IaaS to individual customers, usually mid-to-large companies willing to outsource their varying computational loads, or to directly dematerialize their server farms. It is mostly a transitional market, but one that will be important for at least five years from now. The system integrators: large-scale integrators and consultancies are pushing sets of integrated offerings that cover everything (technical, management, legal and procurement aspects) for companies that are willing to move their IT completely off their backs. It is not a new model (Accenture, Atos Origin et al. have been doing it since the beginning of time), but thanks to the availability of open source components it does provide more compelling economics. The small VARs: there is a large number of very small companies that target the very low end of the market, and that provide a sort of staircase to the cloud for small and mostly static workloads. It is a market of small consultants, system administrators and consultancies covering the part of the market that is largely invisible to larger entities, and that is starting to move towards web apps, hosted email and so on, but still needs to manage some legacy systems that are currently in house. The service providers: this has just started as a movement, but in my opinion it will become quite big: the specialized, cloud-based service (an example is the freshly released Sencha.io). I believe that those specialized providers will be among the most important contributors to the individual OSS components that are internally used to deliver a service, and will provide a backbone of updates for most of the ancillary parts, like storage, DB, identity and so on. And the obvious question is: who will win? In my view, if every actor works properly, there will be more than one dominant actor. The reason is related to the two main factors of adoption, that is, packaging and acceleration. Packaging is the ease of installation, deployment and management, and in general the user- and developer-friendliness of the whole. So, taking for granted that Red Hat will open source it (as it did with all its previous acquisitions), the main advantage for OpenShift and CloudForms will be ease of installation for Red Hat (and clone) users, which are still the majority of enterprise Linux deployments. It will be natural for a RHEL user to start converting some servers into…

    Original URL path: http://carlodaffara.conecta.it/2011/05/05/index.html (2016-02-18)

  • 25 « May « 2011 « carlodaffara.conecta.it
    …remotization approaches. How many servers would be necessary to support half a billion Facebook users if their app was not web based, but accessed through an ICA or RDP layer? The advantages are so overwhelming that nowadays most new enterprise apps are exclusively web based, even if hosted on local premises. It is designed to reduce maintenance costs: the use of Google's synchronization features and identity services allows management-as-a-service to be introduced fairly easily, whereas most thin client infrastructures require a local server to act as a master for the individual client groups, and this adds costs and complexity. The extremely simple approach used to provide replication and management is also easy to grasp and extremely scalable; provisioning, from opening the box to beginning useful work, is fast and requires no external support, only an internet connection and a login. This form of self-service provisioning, despite brochure claims, is still unavailable for most thin client infrastructures, and when available it is extremely costly in terms of licensing. Updates are really failsafe: Ed Bott commented that "automatic updates are a nightmare", and it's clear that he has quite a deep experience with the approach used by Microsoft. It is true that automatic updates are not a new thing, but the approach used by ChromeOS is totally different from the one used by Microsoft, Apple, or most Linux distributions. ChromeOS updates are distributed like satellite set-top-box updates: whole, integrated new images sent through a central management service. The build process ensures that things work, because if something breaks during the build, the image will simply not be built and distributed. Companies scared by the latest service-pack roulette (will it run? will it stop in the middle, leaving half of the machines broken in the rain?) should be happy to embrace a model where only working updates are distributed. And just as a comment: this model is possible only because the components, with the exception of Flash, are all open source and thus rebuildable. To those that still doubt that such a model can work, I suggest a simple experiment: go to the chromiumos developer wiki, download the source, and try a full build. It is quite an instructive process. The Citrix Receiver thing is a temporary stopgap: if there is one thing that muddied the waters in the past, it was the presentation by Citrix of the integrated Receiver for the ICA protocol. It helped create the misconception that ChromeOS is a thin client operating system, and framed users' expectations in the wrong way. The reality is that Google is pushing for a totally in-browser, HTML5 app world; ICA, RDP and other remotization features are there only to support those legacy apps that are not HTML5 enabled. Only when the absolute majority of apps are web based does the economics of ChromeOS make sense. On the other hand, it is clear that there are still substantial hurdles…
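    The whole-image update model described above can be sketched in a few lines: the client downloads a complete image, verifies it against the digest published by the update service, stages it on the inactive slot, and only then flips the active-slot marker, so a failed download or verification leaves the running system untouched. This is a minimal illustration of the idea, not ChromeOS's actual updater; the manifest format and the A/B slot names are assumptions made for the sketch.

    ```python
    import hashlib

    def apply_image_update(image: bytes, manifest: dict, slots: dict, state: dict) -> bool:
        """Atomically apply a whole-image update to the inactive slot.

        `manifest` carries the expected SHA-256 of the full image (a
        hypothetical format); `slots` maps slot names to image contents;
        `state["active"]` names the slot currently booted. Nothing is
        switched until the new image verifies in full.
        """
        digest = hashlib.sha256(image).hexdigest()
        if digest != manifest["sha256"]:
            # Bad or truncated download: the active slot is untouched,
            # so the machine keeps booting the old, working image.
            return False
        inactive = "B" if state["active"] == "A" else "A"
        slots[inactive] = image      # stage the complete new image
        state["active"] = inactive   # single atomic switch; next boot uses it
        return True
    ```

    The point of the design is that there is no half-applied state: either the full image verified and the marker flipped, or nothing changed, which is why "only working updates are distributed" holds end to end.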

    Original URL path: http://carlodaffara.conecta.it/2011/05/25/index.html (2016-02-18)

  • Index of /2011
    02/  28-Apr-2015 08:54   03/  28-Apr-2015 08:54   04/  28-Apr-2015 08:54   05/  28-Apr-2015 08:54   07/  28-Apr-2015 08:54   09/  28-Apr-2015 08:54   11/  28-Apr…

    Original URL path: http://carlodaffara.conecta.it/2011/?p=267 (2016-02-18)

  • 07 « April « 2011 « carlodaffara.conecta.it
    …to contribute their work, you need an advantage for them as well. What can this advantage be? For Eclipse, most of the companies developing their own integrated development environment (IDE) found it economically sensible to drop their own work and contribute to Eclipse instead; it allowed them to quickly reduce their maintenance and development costs while increasing quality as well. The Symbian foundation should have done the same thing, but apparently missed the mark, despite having a large number of partners and members. Why? The reason is time and focus. The Eclipse foundation had, for quite some time, basically used only IBM resources to provide support and development. In a similar way, it took WebKit (which is not quite a foundation, but follows the same basic model) more than two years before it started receiving substantial contributions, as can be found here. And WebKit is much, much smaller than Symbian and Eclipse. For Symbian, I would estimate that it would require at least three or four years before such a project could start to receive important external contributions; that is, unless it is substantially re-engineered, so that the individual parts (some of which are quite interesting and advanced, despite the claims that Symbian is a dead project) can be removed and reused by other projects as well. This is usually the starting point for long-term cooperation. Some tooling was also not in place from the beginning: the need for a separate compiler chain, one that was not open source and that in many aspects was not as advanced as the open source ones, was an additional stumbling block that delayed participation. Another problem was focus. More or less everyone understood that, for a substantial period of time, Symbian would be managed and developed mainly by Nokia. And Nokia made a total mess of differentiating what part of the platform was real, what was a stopgap for future changes, what was end-of-life, and what was the future. Who would invest in the long term in a platform where the only entity that could gain from it was not even that much committed to it? And before flaming me for this comment, let me say that I am a proud owner of a Nokia device; I love most Nokia products, and I think that Symbian could still have been a contender, especially through a speedier transition to Qt for the user interface. But the long list of confusing announcements, delays, changes in plans, and lack of focus on how to beat competitors like iOS and Android clearly reduced the willingness of commercial partners to invest in the venture. Which is a pity: Symbian still powers most phones in the world, and could still enter the market with some credibility. But this latest announcement sounds like a death knell. Obtain the source code through a DVD or USB key? You must be kidding. Do you really think that setting up a webpage with the code and preserving…

    Original URL path: http://carlodaffara.conecta.it/2011/04/07/index.html (2016-02-18)

  • 21 « April « 2011 « carlodaffara.conecta.it
    …and tried to convert them in a sensible way. Before flaming me about the fact that the images were not in raw format: I know it, thank you. My objective is not to perform a perfect test, but to verify Google's assumption that WebP can be used to reduce the bandwidth consumed by traditional, already encoded images while preserving most of the visual quality. This is not a quality comparison, but a field test to see whether the technology works as described. The process I used is simple: I took some photos (I know, I am not a photographer), selected for a mix of detail and low-gradient areas, compressed them to quality 5 using GIMP with all JPEG optimizations enabled, took note of the size, then encoded the same source image with the WebP cwebp encoder, without any parameter twiddling, using the size command-line option to match the size of the compressed JPEG file. The WebP image was then decoded as PNG. The full set was uploaded to Flickr here, and here are some of the results. Photo: Congress Centre, Berlin (top: original JPEG; middle: quality-5 JPEG; bottom: WebP at the same JPEG size). Photo: LinuxNacht, Berlin (top: original JPEG; middle: quality-5 JPEG; bottom: WebP at the same JPEG size). Salzburg castle (top: original JPEG; middle: quality-5 JPEG; bottom: WebP at the same JPEG size). Venice (top: original JPEG; middle: quality-5 JPEG; bottom: WebP at the same JPEG size). There is an obvious conclusion: at small file sizes, WebP handily beats JPEG, even with a good JPEG encoder (the libjpeg-based one used by GIMP), by a large margin. Using a JPEG recompressor and repacker it is possible to even the results a little, but only marginally. With some test materials, like cartoons and anime, the advantage increases substantially. I can safely say that, given…
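    The size-matching step in the test above — asking the WebP encoder to hit the JPEG file's byte count, which cwebp's size option automates internally — amounts to searching the quality scale for the setting whose output size is closest to the target. The sketch below illustrates that search generically; the `encode` callable is a hypothetical stand-in for any quality-parameterized encoder (it is not libwebp's API), and it assumes output size grows with quality.

    ```python
    def match_target_size(encode, target_bytes, lo=0.0, hi=100.0, rounds=8):
        """Bisect on quality to approach a target compressed size.

        `encode(quality)` must return the compressed payload as bytes,
        with size monotonically non-decreasing in quality. Returns the
        best quality found and its payload.
        """
        best_q, best_out = lo, encode(lo)
        for _ in range(rounds):
            mid = (lo + hi) / 2
            out = encode(mid)
            # keep whichever candidate lands closest to the target size
            if abs(len(out) - target_bytes) < abs(len(best_out) - target_bytes):
                best_q, best_out = mid, out
            if len(out) > target_bytes:
                hi = mid   # too big: search lower qualities
            else:
                lo = mid   # small enough: try higher qualities
        return best_q, best_out
    ```

    With a real encoder plugged in, this reproduces the test's setup: both files end up at (nearly) the same byte count, so any visual difference is attributable to the codec rather than the size budget.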

    Original URL path: http://carlodaffara.conecta.it/2011/04/21/index.html (2016-02-18)