Following our first article in the “Future of the Internet” series earlier this week, we have another look ahead. “Blending the Browser” focused on the client side of things: how software developers will adapt their programs to meet the needs of corporate and home users around the world. Part two of this series covers the internet protocols themselves and the impact they will have on the web as we know it.
Judging by previous trends and publications, HTTP is here to stay. As far as server-based communications are concerned, HTTP offers a versatile and easy-to-use universal medium of communication, at decent speeds and with relatively few drawbacks. Even if a better protocol could be designed, it is highly doubtful that any from-scratch implementation will ever replace HTTP, simply because of how widespread it has become and how much everything depends on it.
But HTTP is just a protocol. Nothing more, nothing less. It’s an invisible standard in the background that tells browsers how to request information from servers and how to receive the traffic that comes back. By itself, it means nothing; and by itself, it has no effect on the future of the internet. What matters most is the type of data being sent and the way it’s used.
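The protocol’s simplicity is easy to see in the raw exchange itself. Here is a small sketch that assembles the plain-text request a browser would send and picks the status code out of a response; the host, path, and canned response are placeholders, not a real endpoint:

```python
# A minimal HTTP/1.1 exchange written out by hand, to show how little
# the protocol itself does: a request line, a few headers, a blank line.
# The host, path, and canned response below are made up for illustration.

def build_request(host, path):
    """Assemble the plain-text GET request a browser would send."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

def parse_status(response):
    """Pull the numeric status code out of a raw response."""
    status_line = response.split("\r\n", 1)[0]  # e.g. "HTTP/1.1 200 OK"
    return int(status_line.split(" ")[1])

request = build_request("example.com", "/index.html")
canned_response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
print(parse_status(canned_response))  # 200
```

Everything interesting happens in what travels over this thin pipe, which is exactly why the data format matters more than the protocol.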
At the moment the technological communities around the world are in the process of repairing a decade of mistakes. In order to universally share data and communicate across any and all borders, standardization, not variety, is the key to success. The number of HTML attributes a browser supports isn’t as important as the number of browsers that can understand them. As we are now realizing, the best and most useful form of communication comes from transmitting data in as standard and universal a manner as possible.
At the moment the craze is RSS/RDF feeds, but in their current form they aren’t going to last. They are serving as the cornerstones of a foundation for something much bigger still to come, but as of now ‘feeds’ as we know them are just the beginning. Standardized content goes a lot further than an article with a title, date, content, and comments. RDF paves the way for the storage of any and all data in a universal format.
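A feed really is just a handful of named fields per item, which is what makes it so easy for machines to consume. A sketch with the standard library, parsing a made-up two-item RSS document into plain records:

```python
import xml.etree.ElementTree as ET

# A made-up, minimal RSS 2.0 feed: each item is nothing but a few
# named fields -- the "title, date, content" skeleton described above.
FEED = """<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item>
      <title>First post</title>
      <pubDate>Mon, 01 May 2006 12:00:00 GMT</pubDate>
      <description>Hello, world.</description>
    </item>
    <item>
      <title>Second post</title>
      <pubDate>Tue, 02 May 2006 12:00:00 GMT</pubDate>
      <description>More structured content.</description>
    </item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Reduce a feed to plain records a script can sort or aggregate."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": item.findtext("title"),
            "date": item.findtext("pubDate"),
            "body": item.findtext("description"),
        }
        for item in root.iter("item")
    ]

for entry in read_feed(FEED):
    print(entry["title"])
```

Once the content is in this shape, aggregating a hundred feeds is just a loop; no screen-scraping of HTML required.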
Ten years from now, HTML will be extinct. As everything from sifting through news to searching the web for research passes from humans to machines, it will become blatantly obvious that HTML is about as inefficient as a format can get. With HTML there is, for lack of a better word, too much useless text: it takes too much space to deliver too little hard information. In today’s fast-paced world it can hardly keep up – in tomorrow’s it won’t stand a chance.
A more recent and steadily growing trend is the use of XML with XSLT transformations. This ingenious approach takes the original idea behind W3C-valid (X)HTML to the next level. In valid XHTML the key point is the separation of content from presentation: CSS stores everything related to how text and information are displayed, while the HTML file itself stores the text, the ‘meat’ of the article or site in question. But it’s not enough.
With XML/XSLT, all data is stored in the XML file. If you view a forum post, for instance, each and every entry is a tag in the XML file, and the XSLT stylesheet tells your browser to pour that data into a table or a div and make it look pretty for human consumption. But the raw data stays separate – ready and ripe for the picking by bots and intranet scripts.
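The same separation can be sketched in a few lines. The standard library ships no XSLT engine, so in this sketch an ordinary function stands in for the stylesheet; the forum thread and its field names are invented for illustration. The point is that the XML never mentions tables, divs, or styling, so a bot can read it without ever touching the presentation layer:

```python
import xml.etree.ElementTree as ET

# A made-up forum thread stored as pure data. Note there is nothing
# about layout or styling in it -- that lives entirely in render_html.
THREAD = """<thread subject="Is HTML doomed?">
  <post author="alice">I think so.</post>
  <post author="bob">Not for a while yet.</post>
</thread>"""

def render_html(xml_text):
    """Presentation layer: wrap each post in markup for human eyes.
    (A stand-in for an XSLT template.)"""
    thread = ET.fromstring(xml_text)
    rows = "".join(
        f"<tr><td>{p.get('author')}</td><td>{p.text}</td></tr>"
        for p in thread.iter("post")
    )
    return f"<table><caption>{thread.get('subject')}</caption>{rows}</table>"

def extract_posts(xml_text):
    """Data layer: the same file read by a bot, skipping styling entirely."""
    thread = ET.fromstring(xml_text)
    return [(p.get("author"), p.text) for p in thread.iter("post")]

print(render_html(THREAD))
print(extract_posts(THREAD))
```

Two consumers, one source file: the browser gets a table, the script gets tuples, and neither one gets in the other’s way.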
But this is just the beginning. The internet of tomorrow won’t be a tangled web that spans the world without sense or order. As HTML goes away and real data is valued over meaningless words and advertisements, the entire web becomes one huge relational file system, with clearly laid-out relationships between every bit of data and the next. Think of an SQL database: one table is linked to another, and a script joins and aggregates the information to present exactly what the user wants.
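A toy version of that relational idea, using an in-memory SQLite database: two linked tables and a script that joins and piles the data up per article. The table and column names are invented for the example:

```python
import sqlite3

# Two related tables -- articles, and comments that point back at them
# via a foreign key. All names and rows here are made up.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (article_id INTEGER REFERENCES articles(id),
                           body TEXT);
    INSERT INTO articles VALUES (1, 'On protocols'), (2, 'On feeds');
    INSERT INTO comments VALUES (1, 'Agreed'), (1, 'Not sure'), (2, 'Nice');
""")

# One query follows the relationship and aggregates across it --
# the "pile and present" step a script would perform for the user.
rows = db.execute("""
    SELECT a.title, COUNT(c.body)
    FROM articles a LEFT JOIN comments c ON c.article_id = a.id
    GROUP BY a.id ORDER BY a.id
""").fetchall()
print(rows)  # [('On protocols', 2), ('On feeds', 1)]
```

Scale that join up across sites instead of tables and you have the relational web the paragraph describes.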
Once the entire web becomes nothing more than a set of such indexes, search engines as we know them will cease to exist. The wealth of real data, numbers, facts, and research laid out in a logical, machine-readable format will allow simple scripts to fill in the gaps, dotting the i’s and crossing the t’s by linking every bit of data together.
The future is all about universal data communication and the standardization of information. In ten to fifteen years, the world will have realized that websites are just another pretty face – meaningless unless they contain information of interest or data of value. Once bots and scripts collect the data, it’s entirely up to them how and when it’s displayed for the user to study or read.
It is interesting to note that among the origins of the web are standards that haven’t changed even now, twenty years later. The most basic form of information exchange has taken place since the earliest days of the internet in newsgroups. Since their inception, they alone have had a predictable and solid lifespan, free of bandwagon fluctuations and fan surges. Even today, in the world of Google, blogs, wikis, and forums, many people will go back to the newsgroups when they really need an answer or have something of worth to discuss.
The future of the internet, the common communication platform, will closely resemble such a newsgroup: deeply interrelated data at heart, and fluff kept to a minimum. While Web 2.0 may be today’s craze, and the perception and impression a site gives is what seems to matter most, at the end of the day machines neither know nor appreciate aesthetic beauty: it’s what’s inside that counts.