From static websites to the era of “cookies”: The evolution of the web

The web as we know it today has not always been like this. In this article, we look at how it has evolved over the years.

Web 1.0: the static HTML website

Saluting the web’s legacy: the idea of the web was conceived by Tim Berners-Lee in 1989 as a way to create a shared space for any network-accessible information (Berners-Lee 1998). The first generation of the web came into being in the early 1990s and was tagged “Web 1.0”. It was a read-only web of static HTML pages, created mainly by large organizations, researchers and businesses to broadcast their information, product catalogues and the like, and to establish an online presence. Content was created by the site owners, and the only interaction available to visitors was reading it.

The growth of the Internet in the mid 1990s drove wider adoption of web technology. New scripting languages (notably PHP, created in 1995) and browsers were introduced, and the web evolved to a stage that Tim O'Reilly and Dale Dougherty termed “Web 2.0” in 2004 (O'Reilly 2005).

Web 2.0 emerged as a two-way web, one that allows content creation by both the owners and the audience, e.g. blogs and wikis, letting people share opinions and build up the content through comments. Websites moved from being information portals to platforms: people could access services online and create online profiles, and content was generated dynamically from persistent data stores such as databases. Ownership tilted towards the community rather than just the organizations that put up the web applications. Because the applications of this era were aimed at easier collaboration, design and interactivity became important, and technologies such as Asynchronous JavaScript and XML (AJAX) were introduced. It is also worth noting that this era witnessed the emergence of social networking sites.
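To give a feel for what AJAX made possible, here is a minimal sketch of updating part of a page without a full reload. It uses the modern fetch API rather than the original XMLHttpRequest, and the endpoint /api/comments and the #comments element are made-up examples, not a real site’s API.

```ts
// Sketch of the AJAX idea: fetch new data asynchronously and update
// part of the page in place, instead of reloading the whole document.
// "/api/comments" and "#comments" are hypothetical names for this example.

async function refreshComments(): Promise<void> {
  const response = await fetch("/api/comments"); // asynchronous request
  const comments: { author: string; text: string }[] = await response.json();

  const list = document.querySelector("#comments");
  if (!list) return;

  // Replace the comment list with the freshly fetched content.
  list.innerHTML = comments
    .map((c) => `<li><strong>${c.author}</strong>: ${c.text}</li>`)
    .join("");
}

// Poll every 30 seconds so readers see new comments without reloading.
setInterval(refreshComments, 30_000);
```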

A third generation of the web, Web 3.0, was described in 2006 by John Markoff in The New York Times as the “intelligent web” (Markoff 2006).

With Web 3.0, web pages are built and linked in a way that makes the content easier for both humans and machines to understand, allowing machines to gather data from users’ interaction with the web without the users explicitly supplying it by filling in forms. Techniques such as cookies are used to track user data; the collected data are analysed and used to develop systems that can make predictions and recommendations to users. The current Amazon website, for example, models Web 3.0.
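As a rough illustration of such tracking (not any particular site’s mechanism), the sketch below assigns each browser a random identifier in a first-party cookie on its first visit, so later page views can be attributed to the same visitor. The cookie name visitor_id and both helper functions are hypothetical.

```ts
// Minimal sketch of a first-party tracking cookie in the browser.
// The cookie name "visitor_id" is a made-up example, not a standard.

function readCookie(name: string): string | undefined {
  // document.cookie is a single "k=v; k2=v2" string, so split and search it.
  return document.cookie
    .split("; ")
    .find((part) => part.startsWith(name + "="))
    ?.split("=")[1];
}

function ensureVisitorId(): string {
  let id = readCookie("visitor_id");
  if (!id) {
    // First visit: mint a random identifier and persist it for ~1 year.
    id = crypto.randomUUID();
    document.cookie = `visitor_id=${id}; max-age=31536000; path=/; SameSite=Lax`;
  }
  return id;
}

// Every page view can now be reported against the same identifier,
// without the user ever filling in a form.
console.log("This browser is known as", ensureVisitorId());
```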

Representing web cookies

Just like in any other statistical work, a reasonable volume of data is needed to arrive at these predictions; websites that exploit the full features of Web 3.0 use cookies and other techniques to collect as much data as possible about their users.
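On the wire, a cookie is represented by a pair of HTTP headers: the server sets it with Set-Cookie, and the browser sends it back on every matching request in the Cookie header. The minimal Node.js sketch below illustrates this exchange; the cookie name and attribute choices are illustrative assumptions, not a prescription.

```ts
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

// Minimal sketch: a server that issues a tracking cookie and recognizes returning visitors.
createServer((req, res) => {
  const cookies = req.headers.cookie ?? ""; // e.g. "visitor_id=abc123; theme=dark"
  const match = cookies.match(/(?:^|; )visitor_id=([^;]+)/);

  if (match) {
    console.log(`Returning visitor ${match[1]} requested ${req.url}`);
  } else {
    // First visit: attach a Set-Cookie header so the browser stores an identifier.
    res.setHeader(
      "Set-Cookie",
      `visitor_id=${randomUUID()}; Max-Age=31536000; Path=/; HttpOnly; SameSite=Lax`
    );
  }

  res.end("Hello");
}).listen(3000);
```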

Data of this form is huge and is referred to as ‘big data’, and it is often unstructured. In the Web 2.0 approach, for example, a user might be asked to fill in a form whose fields match the columns of a database table; with big data, however, we are trying to collect as much data as possible, so defining table columns for every piece of it would be overwhelming, and individual records may not all look alike (see the sketch below).
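To make that difference concrete, the snippet below contrasts a fixed, form-driven record with two collected interaction events whose shapes differ from one another. All of the field names here are invented for the example.

```ts
// A Web 2.0-style record: every row has the same fields,
// because they mirror the columns of a form and of a database table.
interface SignupRow {
  name: string;
  email: string;
  country: string;
}

// "Big data" style: events captured from user interaction.
// Each record carries whatever was observable, so the shapes differ.
const events: Record<string, unknown>[] = [
  {
    visitorId: "a1b2c3",
    type: "page_view",
    url: "/products/42",
    referrer: "https://www.google.com",
  },
  {
    visitorId: "a1b2c3",
    type: "search",
    query: "running shoes",
    resultsShown: 20,
    device: "mobile",
  },
];

// Forcing these into one fixed table would mean a column for every
// possible field, most of them empty on any given row.
```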

Associated with big data is the need for fast processing and large storage provisioning; this level of computation requires processing power and capacity beyond what a single computer can offer. The challenge is being addressed with distributed computing (we will look at this in another article).

Once collected, the data are analysed and used to build intelligence into the applications; the machine (server) also learns about its users, so predictions become more accurate as more data arrive. Web 3.0 also allows data from different sources to be linked together for better intelligence. For example, your data on LinkedIn could be used on Facebook to suggest friends to you, and your search for an item on Amazon could be used to target adverts at you on Facebook.
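As a deliberately tiny sketch of how collected histories turn into recommendations, the toy function below counts which items were viewed by the same visitors and suggests the most frequent companions. Real recommendation systems are far more sophisticated; the data and names here are made up, and the only point is that the counts become more reliable as more histories are collected.

```ts
// Toy recommender: "visitors who viewed X also viewed ...".
// Input: per-visitor view histories, e.g. gathered via tracking cookies.
type History = { visitorId: string; viewedItems: string[] };

function recommend(histories: History[], item: string, topN = 3): string[] {
  const counts = new Map<string, number>();

  for (const { viewedItems } of histories) {
    if (!viewedItems.includes(item)) continue;
    for (const other of viewedItems) {
      if (other === item) continue;
      counts.set(other, (counts.get(other) ?? 0) + 1);
    }
  }

  // The more histories we collect, the more stable these counts become.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([other]) => other);
}

// Example usage with made-up data:
const histories: History[] = [
  { visitorId: "a", viewedItems: ["running shoes", "socks", "water bottle"] },
  { visitorId: "b", viewedItems: ["running shoes", "socks"] },
  { visitorId: "c", viewedItems: ["laptop", "mouse"] },
];
console.log(recommend(histories, "running shoes")); // ["socks", "water bottle"]
```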

From the above, one can see that the stages of the web’s evolution are defined by increasing interactivity. While Web 1.0 was one-way communication from websites to users, Web 2.0 allowed interaction from users as well, and Web 3.0 allows interaction between the web application, the user and the server (machine). Once the capabilities of the current Web 3.0 are used to their fullest, we can expect Web 4.0.

Various models are being suggested for Web 4.0, ranging from a web that will act like a typical operating system to one so intelligently linked, and with enough data about its users, that it can read web content and make accurate decisions on the user’s behalf. Google Duplex, an AI assistant, is probably giving us a glimpse into this future, or could we say it is still exhibiting the features of the intelligent web? What is your own prediction for Web 4.0?

Bibliography

Berners-Lee, T. 1998. The World Wide Web: A very short personal history. Available at: http://www.w3.org/People/Berners-Lee/ShortHistory.html [Accessed: 1 June 2018].

Markoff, J. 2006. Entrepreneurs See a Web Guided by Common Sense. Available at: https://www.nytimes.com/2006/11/12/business/12web.html [Accessed: 3 June 2018].

O'Reilly, T. 2005. What Is Web 2.0. Available at: https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html [Accessed: 3 June 2018].
