Why should a (bioinformatics) scientist learn web development?
Up to now, bioinformatics research with genomics datasets has been happening like this: you download the data from the website of a big-iron institution (NCBI, TAIR), set them up locally, BLAST ‘em, MySQL ‘em, parse them with Perl scripts, and do all other sorts of unimaginable things. Even though bioinformaticians might be unaware of the term, part of the local processing that happens with the data is a mashup. The term refers to the combination of pieces of data from different sources, something akin to what has been happening on the web (see also Web 2.0 or the programmable web). This is in no way close to the myriad Web 2.0 mashups that exist out there, created using APIs offered openly by different servers. In that case, different sets of data are brought together by the mashup developer, who adds value to them through their recombination (and reciprocally adds value to the providing server, by spreading the data out and offering a better view of them).
While the big-iron bioinformatics institutions don’t quite live in a parallel universe from Web 2.0 (we have to credit the NCBI server for its CGI interface), they are light years away from the programmable web. That is both because of the technologies they are using (forget about Ruby on Rails and REST) and because of the small number of institutions like NCBI offering APIs.
So why should a (bioinformatics) scientist learn web development? Because the situation I am describing above will change. These bioinformatics institutions will adopt Web 2.0 at some point in the coming years - I can bet you now that, OK maybe in 5 years, we will have an NCBI running a nice REST API backed by Rails or Django. But it might happen even earlier, when people take things into their own hands. And for that I refer you to Amazon Web Services, where bioinformaticians can build their own NCBI running on Rails and sell it to other Web 2.0-minded scientists, who understand the (added) value of an interoperable web of data.
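To make the idea concrete: NCBI already exposes one REST-style entry point today, the E-utilities. Here is a minimal Python sketch of what "programmable web" bioinformatics looks like - building an `efetch` request URL for a single sequence record instead of downloading and parsing a whole local dump. The accession number used here is just an illustrative example.

```python
from urllib.parse import urlencode

# Base URL for NCBI's E-utilities, a REST-style HTTP interface.
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def efetch_url(db, accession, rettype="fasta", retmode="text"):
    """Build an efetch request URL for one record in an NCBI database."""
    params = urlencode({"db": db, "id": accession,
                        "rettype": rettype, "retmode": retmode})
    return f"{EUTILS_BASE}/efetch.fcgi?{params}"

# A mashup can pull exactly the record it needs over plain HTTP
# (fetch the URL with urllib.request.urlopen, requests, etc.):
url = efetch_url("nucleotide", "NM_000546")
```

No Perl parsing of a local flat file: the data arrive on demand, in the format you ask for, ready to be recombined with data from any other API.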