Web History: Evolution of Web 1.0, Web 2.0 and Web 3.0

This blog explores the complete history of the web from its beginnings to date (2019), tracing the evolution of Web 1.0, Web 2.0, and Web 3.0.

 

Evolution of the Web: Web 1.0


To start with, most websites were just a collection of static web pages. The shallow web, also known as the static web, is primarily a collection of static HTML web pages providing information about products or services offered.

 

After a while, the web became dynamic, delivering web pages created on the fly. The ability to create web pages from the content stored on databases enabled web developers to provide customized information to visitors.

 

These sites are known as the deep web or the dynamic web. Though a visitor to such websites gets information attuned to his or her requirements, these sites provide primarily one-way interaction and limited user interactivity.

 

The users have no role in content generation and no means to access content without visiting the sites concerned. Shallow websites and deep websites, which offer no or minimal user interaction, are now generally termed Web 1.0.

 

Web 2.0

In the last few years, a new class of web applications, known as Web 2.0 (or Service-Oriented Applications), has emerged.

 

These applications let people collaborate and share information online in seemingly new ways—examples include social networking sites such as MySpace.com, media sharing sites such as YouTube.com, and collaborative authoring sites such as Wikipedia.

 

These second-generation webs offer smart user interfaces and built-in facilities for users to generate and edit content presented on the web and thereby enrich the content base.

 

Besides leveraging the users’ potential in generating content, Web 2.0 applications provide facilities to keep the content under the user’s own categories (tagging feature) and access it easily (web feed tool).

 

This new version of web applications is also able to integrate multiple services under a rich user interface.

 

With the incorporation of new web technologies such as Asynchronous JavaScript and XML (AJAX), Ruby, blog, wiki, social bookmarking, and tagging, the web is fast becoming more dynamic and highly interactive, where users can not only pick content from a site but can also contribute to it.
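To make the AJAX idea concrete, here is a minimal sketch of an asynchronous request that updates part of a page without a full reload. It uses the browser's fetch API rather than the classic XMLHttpRequest, and the /api/comments endpoint and its response shape are hypothetical.

```typescript
// Minimal AJAX-style sketch: fetch new content asynchronously and update the
// page without a full reload. The "/api/comments" endpoint is hypothetical.
async function loadComments(postId: string): Promise<void> {
  const response = await fetch(`/api/comments?post=${encodeURIComponent(postId)}`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const comments: { author: string; text: string }[] = await response.json();
  const list = document.getElementById("comments");
  if (list) {
    // Render each comment as a list item inside the existing page.
    list.innerHTML = comments
      .map(c => `<li><strong>${c.author}</strong>: ${c.text}</li>`)
      .join("");
  }
}

loadComments("42").catch(console.error);
```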

 

The web feed technology allows users to keep up with a site’s latest content without having to visit it.

 

Another feature of the new web is the proliferation of websites with APIs (application programming interfaces). An API from a web service facilitates web developers in collecting data from the service and creating new online applications based on these data.

 

Web 2.0 is a collection of technologies, business strategies, and social trends. Web 2.0 is a far more interactive and dynamic application platform than its predecessor, Web 1.0.

 

Weblogs or Blogs


With the advent of software like WordPress and TypePad, along with blog service companies like Blogger.com, the weblog is fast becoming the communication medium of the new web. Unlike traditional Hypertext Markup Language (HTML) web pages, blogs offer non-programmers the ability to communicate on a regular basis.

 

Traditional HTML-style pages required knowledge of style, coding, and design in order to publish content that was basically read-only from the consumer’s point of view. Weblogs remove many of these constraints by providing a standard user interface that does not require customization.

 

Weblogs originally emerged as repositories of links but soon evolved into a means to publish content and allow readers to become content providers. The essence of a blog lies in its format: small chunks of content, referred to as posts, are date-stamped, maintained in reverse chronological order, and may include links, text, and images.

 

The biggest advancement made with weblogs is the permanence of content, each post having a unique Uniform Resource Locator (URL). This allows content to be posted along with comments, forming a permanent record of information.

 

This is critical in that having a collaborative record that can be indexed by search engines will increase the utility and spread the information to a larger audience.

 

Wikis


A Wiki is a website that promotes the collaborative creation of content. Wiki pages can be edited by anyone at any time. Informational content can be created and easily organized within the wiki environment and then reorganized as required. Wikis are currently in high demand in a large variety of fields, due to their simplicity and flexibility.

 

Documentation, reporting, project management, online glossaries and dictionaries, discussion groups, and general information applications are just a few examples of where the end user can provide value.

 

While in principle anyone can alter content, some large-scale wiki environments have extensive role definitions that determine who can perform update, restore, delete, and creation functions.

 

Wikipedia, like many wiki-type projects, has readers, editors, administrators, patrollers, policy makers, subject matter experts, content maintainers, software developers, and system operators, all of which create an environment open to sharing information and knowledge with a large group of users.

 

RSS Technologies

Originally developed by Netscape, RSS was intended to publish news-type information based on a subscription framework. Many Internet users have experienced the frustration of searching Internet sites for hours at a time to find relevant information.

 

RSS is an XML-based content-syndication protocol that allows websites to share information as well as aggregate information based on the users’ needs.

 

In the simplest form, RSS shares the metadata about the content without actually delivering the entire information source. An author might publish the title, description, publish date, and copyrights to anyone that subscribes to the feed.

 

The end user is required to have an application called an aggregator in order to receive the information. By having the RSS aggregator application, end users are not required to visit each site in order to obtain information.
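As a rough illustration of what an aggregator does, the sketch below fetches a feed and extracts only the item metadata (title, description, publish date, link). The feed URL is a placeholder, and the browser's built-in DOMParser is assumed as the XML parser.

```typescript
// Sketch of an RSS aggregator step: fetch a feed and read item metadata only.
// The feed URL is hypothetical; DOMParser is the browser's built-in XML parser.
async function readFeed(feedUrl: string) {
  const xmlText = await (await fetch(feedUrl)).text();
  const doc = new DOMParser().parseFromString(xmlText, "application/xml");
  const items = Array.from(doc.querySelectorAll("item")).map(item => ({
    title: item.querySelector("title")?.textContent ?? "",
    description: item.querySelector("description")?.textContent ?? "",
    pubDate: item.querySelector("pubDate")?.textContent ?? "",
    link: item.querySelector("link")?.textContent ?? "",
  }));
  // The aggregator lists these entries; the user follows a link only when interested.
  return items;
}

readFeed("https://example.com/feed.rss").then(console.log).catch(console.error);
```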

 

From an end user’s perspective, RSS technology changes the communication method from a search-and-discover model to a notification model. Users can locate content that is pertinent to their job and subscribe to it, which enables a much faster communication stream.

 

Social Tagging


Social tagging describes the collaborative activity of marking shared online content with keywords or tags as a way to organize content for future navigation, filtering, or search. Traditional information architecture utilized a central taxonomy or classification scheme in order to place information into a specific predefined bucket or category.

 

The assumption was that trained librarians understood more about information content and context than the average user. While this might have been true for the local library, the enormous amount of content on the Internet makes this type of system unmanageable.

 

Tagging offers a number of benefits to the end user community. Perhaps the most important benefit to the individual is the ability to bookmark information in a way that is easier for them to recall at a later date.

 

The idea of social tagging is to allow multiple users to tag content in a way that makes sense to them; by combining these tags, users create an environment in which the opinions of the majority define the appropriateness of the tags themselves.

 

The collection of popular tags that emerges is referred to as a folksonomy, defined as a folk taxonomy of important and emerging content within the user community.
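A minimal sketch of how a folksonomy can emerge from individual tags: count how many users applied each tag to a resource and rank the tags by frequency. The data shapes and sample values are invented for illustration.

```typescript
// Minimal folksonomy sketch: combine tags applied by many users and rank them
// by frequency, so the majority view emerges for each shared resource.
type Tagging = { user: string; resource: string; tag: string };

function topTags(taggings: Tagging[], resource: string, limit = 5): [string, number][] {
  const counts = new Map<string, number>();
  for (const t of taggings) {
    if (t.resource !== resource) continue;
    counts.set(t.tag, (counts.get(t.tag) ?? 0) + 1);
  }
  // Sort tags by how many users applied them, most popular first.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, limit);
}

// Example: three users tag the same photo; "sunset" becomes the consensus tag.
const sample: Tagging[] = [
  { user: "a", resource: "photo1", tag: "sunset" },
  { user: "b", resource: "photo1", tag: "sunset" },
  { user: "c", resource: "photo1", tag: "beach" },
];
console.log(topTags(sample, "photo1")); // [["sunset", 2], ["beach", 1]]
```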

 

The vocabulary problem is defined by the fact that different users describe the same content in different ways. This disagreement can lead to missed information or inefficient user interactions.

 

One of the best examples of social tagging is Flickr, which allows users to upload images and tag them with appropriate metadata keywords. Other users who view the images can also tag them with their own concept of appropriate keywords. After a critical mass has been reached, the resulting tag collection will identify images correctly and without bias.

 

Other sites like iStockPhoto have also utilized this technology, but more along a sales channel than a community one.

 

Mashups: Integrating Information

The final Web 2.0 technology concerns information integration, commonly referred to as mashups. Applications can be combined to deliver additional value that the individual parts could not provide on their own; a brief sketch of the idea follows the examples below:

 

1. HousingMaps.com combines the Google mapping application with real estate listings from Craigslist.

 

2. Chicagocrime.org overlays local crime statistics on top of Google Maps so end users can see what crimes were committed recently in the neighborhood.

3. Another site synchronizes Yahoo! Inc.’s real-time traffic data with Google Maps.
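The sketch below illustrates the general mashup pattern rather than any of the sites above: listings are pulled from one (hypothetical) service and plotted through a stand-in for a mapping API's marker call.

```typescript
// Mashup sketch: pull data from one service and overlay it on another.
// The listings endpoint and the addMarker() helper are hypothetical placeholders.
interface Listing { address: string; lat: number; lon: number; price: number; }

// Stand-in for a real mapping API call; a real mashup would use the map provider's SDK here.
function addMarker(lat: number, lon: number, label: string): void {
  console.log(`marker at ${lat},${lon}: ${label}`);
}

async function showListingsOnMap(city: string): Promise<void> {
  const listings: Listing[] = await (
    await fetch(`https://listings.example.com/api?city=${encodeURIComponent(city)}`)
  ).json();

  for (const home of listings) {
    // Each listing from one service becomes a marker on the other service's map.
    addMarker(home.lat, home.lon, `${home.address} - $${home.price}`);
  }
}

showListingsOnMap("Chicago").catch(console.error);
```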

 

Much of the work with web services will enable greater extensions of mashups and combine many different businesses and business models. Organizations, like Amazon and Microsoft, are embracing the mashup movement by offering developers easier access to their data and services.

 

Moreover, they’re programming their services so that more computing tasks, such as displaying maps onscreen, get done on the users’ Personal Computers rather than on their far-flung servers.

 

User Contributed Content


One of the basic themes of Web 2.0 is user-contributed information. The value derived from the contributed content comes not from a subject matter expert, but rather from individuals whose small contributions add up. 

 

Comparison between Web 1.0 and Web 2.0

Web 1.0 Characteristics → Web 2.0 Characteristics

  • Static content → Dynamic content
  • Producer-based information → Participatory-based information
  • Messages pushed to consumer → Messages pulled by the consumer
  • Institutional control → Individual enabled
  • Top-down implementation → Bottom-up implementation
  • Users search and browse → Users publish and subscribe
  • Transactional-based interactions → Relationship-based interactions
  • Goal of mass adoption → Goal of niche adoption
  • Taxonomy → Folksonomy

Well-known examples of user-contributed content are the product review systems on Amazon.com and the reputation system used on eBay.com. A common practice of online merchants is to enable their customers to review or express opinions on the products they have purchased.

 

Online reviews are a major source of information for consumers and have enormous implications for a wide range of management activities, such as brand building, customer acquisition and retention, product development, and quality assurance.

 

A person’s reputation is a valuable piece of information that can be used when deciding whether or not to interact or do business with him or her. A reputation system is a bidirectional medium where buyers post feedback on sellers and vice versa.

 

For example, eBay buyers voluntarily comment on the quality of service, their satisfaction with the item traded, and promptness of shipping. Sellers comment about the prompt payment from buyers or respond to comments left by the buyer.

 

Reputation systems may be categorized into three basic types: ranking, rating, and collaborative. Ranking systems use quantifiable measures of users’ behavior to generate a ranking. Rating systems use explicit evaluations given by users in order to define a measure of interest or trust.

 

Finally, collaborative filtering systems determine the level of relationship between the two individuals before placing a weight on the information. For example, if a user has reviewed similar items in the past, then the relevancy of a new rating will be higher.
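The toy functions below sketch the difference between a plain rating average and a collaborative weighting in which reviewers who share more history with the reader count for more. All data shapes and the weighting rule are illustrative, not a description of any specific site's algorithm.

```typescript
// Sketch of rating vs. collaborative reputation. All data shapes are illustrative.
type Review = { reviewer: string; item: string; score: number }; // score 1..5

// Rating system: explicit evaluations averaged into a single measure of trust.
function averageScore(reviews: Review[], item: string): number {
  const scores = reviews.filter(rv => rv.item === item).map(rv => rv.score);
  return scores.length ? scores.reduce((a, b) => a + b, 0) / scores.length : 0;
}

// Collaborative filtering: weight a reviewer's score by how many items they
// have in common with the reader, so similar users count for more.
function weightedScore(reviews: Review[], item: string, readerItems: Set<string>): number {
  let total = 0;
  let weightSum = 0;
  for (const r of reviews.filter(rv => rv.item === item)) {
    const overlap = reviews.filter(
      o => o.reviewer === r.reviewer && readerItems.has(o.item)
    ).length;
    const weight = 1 + overlap; // more shared history => higher relevance
    total += r.score * weight;
    weightSum += weight;
  }
  return weightSum ? total / weightSum : 0;
}
```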

 

Web 3.0


In current web applications, information is presented in natural language, which humans can process easily; but computers cannot manipulate natural language information on the web meaningfully.

 

The semantic web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation; it aims to provide a universal medium for information exchange by putting documents with computer-processable meaning (semantics) on the web.

 

Adding semantics radically changes the nature of the web—from a place where information is merely displayed to one where it is interpreted, exchanged, and processed.

 

Associating meaning with content or establishing a layer of machine-understandable data enables a higher degree of automation and more intelligent applications and also facilitates interoperable services.

 

Semantic web technologies will enhance Web 2.0 tools and their associated data with semantic annotations and semantic knowledge representations, thus enabling better automatic processing of data, which in turn will enhance search mechanisms, the management of tacit knowledge, and the overall efficiency of existing knowledge management (KM) tools.

 

Semantic blogging, semantic wikis (or a semantic Wikipedia), semantically enhanced social networks, semantically enhanced KM, and semantically enhanced user support will multiply these benefits.

 

The ultimate goal of the semantic web is to support machine-facilitated global information exchange in a scalable, adaptable, extensible manner, so that information on the web can be used for more effective discovery, automation, integration, and reuse across various applications.

 

The three key ingredients that constitute the semantic web and help achieve its goals are semantic markup, ontology, and intelligent software agents.

 

Mobile Web


With numerous advances in mobile computing and wireless communications and the widespread adoption of mobile devices such as smartphones, the web is increasingly being accessed using handheld devices.

 

Mobile web applications could offer some additional features compared to traditional desktop web applications such as location-aware services, context-aware capabilities, and personalization.

 

The Semantic Web

While the web keeps growing at an astounding pace, most web pages are still designed for human consumption and cannot be processed by machines. Similarly, while web search engines help retrieve web pages, they do not offer support to interpret the results—for that, human intervention is still required.

 

As the size of the search results is often just too big for humans to interpret, finding relevant information on the web is not as easy as we would desire.

 

The existing web has evolved as a medium for information exchange among people, rather than machines. As a consequence, the semantic content, that is, the meaning of the information on a web page is coded in a way that is accessible to human beings only.

 

Today’s web may be defined as the syntactic web, where information presentation is carried out by computers, and the interpretation and identification of relevant information are delegated to human beings.

 

With the volume of available digital data growing at an exponential rate, it is becoming virtually impossible for human beings to manage the complexity and volume of the available information. This phenomenon, often referred to as information overload, poses a serious threat to the continued usefulness of today’s web.

 

As the volume of web resources grows exponentially, researchers from industry, government, and academia are now exploring the possibility of creating a semantic web in which meaning is made explicit, allowing machines to process and integrate web resources intelligently.

 

Biologists use a well-defined taxonomy, the Linnaean taxonomy, adopted and shared by most of the scientific community worldwide. Likewise, computer scientists are looking for a similar model to help structure web content.

 

In 2001, Berners-Lee, Hendler, and Lassila published a revolutionary article in Scientific American titled “The Semantic Web: A New Form of Web Content That Is Meaningful to Computers Will Unleash a Revolution of New Possibilities.”

 

The semantic web is an extension of the current web in which information is given well-defined meaning, enabling computers and people to work in cooperation.

 

In the lower part of the architecture, we find three building blocks that can be used to encode text (Unicode), to identify resources on the web (URIs) and to structure and exchange information (XML). Resource Description Framework (RDF) is a simple, yet powerful data model and language for describing web resources.

 

The SPARQL Protocol and RDF Query Language (SPARQL) is the de facto standard used to query RDF data.

 

While RDF and the RDF Schema provide a model for representing semantic web data and for structuring semantic data using simple hierarchies of classes and properties, respectively, the SPARQL language and protocol provide the means to express queries and retrieve information from across diverse semantic web data sources.
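As a rough sketch of how a SPARQL query reaches an RDF store over the SPARQL protocol, the example below sends a standard SELECT query to a hypothetical endpoint and reads the JSON results. The FOAF vocabulary is used only as a familiar example.

```typescript
// Minimal sketch of querying an RDF store over SPARQL's HTTP protocol.
// The endpoint URL is hypothetical; the query uses standard SPARQL syntax.
const endpoint = "https://example.org/sparql";

const query = `
  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  SELECT ?name ?homepage
  WHERE {
    ?person foaf:name ?name ;
            foaf:homepage ?homepage .
  }
  LIMIT 10
`;

async function runQuery(): Promise<void> {
  const response = await fetch(endpoint + "?query=" + encodeURIComponent(query), {
    headers: { Accept: "application/sparql-results+json" },
  });
  const results = await response.json();
  // Each binding row maps the query variables (?name, ?homepage) to RDF terms.
  for (const row of results.results.bindings) {
    console.log(row.name.value, row.homepage.value);
  }
}

runQuery().catch(console.error);
```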

 

The need for a new language is motivated by the different data models and semantics at the level of XML and RDF, respectively.

 

Ontology is a formal, explicit specification of a shared conceptualization of a particular domain—concepts are the core elements of the conceptualization corresponding to entities of the domain being described, and properties and relations are used to describe interconnections between such concepts.

 

The Web Ontology Language (OWL) is the standard language for representing knowledge on the web. This language was designed to be used by applications that need to process the content of information on the web instead of just presenting information to human users.

 

Using OWL, one can explicitly represent the meaning of terms in vocabularies and the relationships between those terms. The Rule Interchange Format (RIF) is the W3C Recommendation that defines a framework to exchange rule-based languages on the web. Like OWL, RIF defines a set of languages covering various aspects of the rule layer of the semantic web.

 

Rich Internet Applications


Rich Internet applications (RIA) are web-based applications that run in a web browser and do not require software installation, but still have the features and functionality of traditional desktop applications.

 

The term “RIA” was introduced in a Macromedia whitepaper in March 2002. RIA represents the evolution of the browser from a static request-response interface to a dynamic, asynchronous interface.

 

Broadband proliferation, consumer demand, and enabling technologies, including Web 2.0, are driving the proliferation of RIAs. RIAs promise a richer user experience and benefits—interactivity and usability that are lacking in many current applications. Some prime examples of RIA frameworks are Adobe’s Flex and AJAX, and examples of RIA include Google’s Earth, Mail, and Finance applications.

 

Enterprises are embracing the promises of RIAs by applying them to user tasks that demand interactivity, responsiveness, and richness. Predominant techniques such as HTML, forms, and CGI are being replaced by more programmer- and user-friendly approaches such as AJAX and web services.

 

Building a web application using fancy technology, however, does not ensure a better user experience. To add real value, developers must address the users’ actual needs and tasks.

 

Web Applications


Web applications differ from traditional software in their operational environment, their development approach, and the faster pace at which they are developed and deployed.

 

Characteristics of web applications are as follows:

Web-based systems, in general, demand good aesthetic appeal—“look and feel”— and easy navigation.

 

Web-based applications demand presentation of a variety of content—text, graphics, images, audio, and/or video—and the content may also be integrated with procedural processing.

 

Hence, their development includes the creation and management of the content and their presentation in an attractive manner, as well as a provision for subsequent content management (changes) on a continual basis after the initial development and deployment.

 

Web applications are meant to be used by a vast, diverse, remote community of users who have different requirements, expectations, and skill sets. Therefore, the user interface and usability features have to meet the needs of a diverse, anonymous user community.

 

Furthermore, the number of users accessing a web application at any time is hard to predict—there could be a “flash crowd” triggered by major events or promotions.

 

Web applications, especially those meant for a global audience, need to adhere to many different social and cultural sentiments and national standards—including multiple languages and different systems of units.

 

Ramifications of the failure of, or user dissatisfaction with, web-based applications can be much worse than for conventional IT systems. Also, web applications can fail for many different reasons.

 

Successfully managing the evolution, change, and newer requirements of web applications is a major technical, organizational, and management challenge. Most web applications are evolutionary in their nature, requiring (frequent) changes in content, functionality, structure, navigation, presentation, or implementation on an ongoing basis.

 

The frequency and degree of change of information content can be quite high; they particularly evolve in terms of their requirements and functionality, especially after the system is put into use.

 

In most web applications, frequency and degree of change are much higher than in traditional software applications, and in many applications, it is not possible to specify fully their entire requirements at the beginning.

 

There is a greater demand on the security of web applications; security and privacy needs of web-based systems are in general more demanding than those of traditional software.

 

Web applications need to cope with a variety of display devices and formats, and support hardware, software, and networks with vastly varying access speeds.

 

The proliferation of new web technologies and standards, and the competitive pressure to use them, bring their own advantages and also additional challenges to the development and maintenance of web applications.

 

The evolving nature of web applications necessitates an incremental developmental process.

 

Web Applications Dimensions


Presentation

Presentation technologies have advanced over time, such as in terms of multimedia capabilities, but the core technology of the web application platform, the Hypertext Markup Language (HTML), has remained relatively stable.

 

Consequently, application user interfaces have to be mapped to document-oriented markup code, resulting in impedance or a gap between the design and the subsequent implementation.

 

The task of communicating content in an appropriate way combines both artistic visual design and engineering disciplines. Usually, based on the audience of the website, there are numerous factors to be considered.

 

For example, in the international case, cultural differences may have to be accounted for, affecting not only languages but also, for example, the perception of color schemes.

 

Further restrictions may originate from the publishing organizations themselves that aim at reflecting the company’s brand with a corresponding corporate design or legal obligations with respect to accessibility.

 

Dialogue

Interactive elements in web applications often appear in the shape of forms that allow users to enter data that are used as input for further processing. More generally, the dialogue concern covers not only the interaction between humans and the application but rather between arbitrary actors (including other programs) and the manipulated information space.

 

The flow of information is governed by the web’s interaction model, which, due to its distributed nature, differs considerably from other platforms.

 

The interaction model is subject to variation, as with recent trends toward more client-side application logic and asynchronous communication between client and server, as in the case of AJAX, which focuses on user interfaces whose look and feel resembles that of desktop applications.

 

Navigation


In addition to the challenge of communicating information, there exists the challenge of making it easily accessible to the user without ending in the “lost in hyperspace” syndrome. This holds true even though the web makes use of only a subset of the rich capabilities of hypertext concepts, for example, allowing only unidirectional links.

 

Over time, a set of common usage patterns has evolved that aids users in navigating new websites they may not have visited before. Applied to web application development, navigation concepts can be extended for accessing not only static document content but also application functionality.

 

Process

The process dimension relates to the operations performed on the information space, which are generally triggered by the user via the web interface and whose execution is governed by business policy.

 

Particular challenges arise from scenarios with frequently changing policies, demanding agile approaches with preferably dynamic wiring between loosely coupled components.

 

Beneath the user interface of a web application lies the implementation of the actual application logic, for which the web acts as a platform to make it available to the concerned stakeholders.

 

In case the application is not distributed, the process dimension is hardly affected by web-specific factors, allowing for standard non-web approaches like Component-Based Software Engineering to be applied. Otherwise, service-oriented approaches account for cases where the wiring extends over components that reside on the web.

 

Data


Data are the content of the documents to be published; although content can be embedded in the web documents together with other dimensions like presentation or navigation, the evolution of web applications often demands a separation, using data sources such as XML files, databases, or web services. Traditional issues include the structure of the information space as well as the definition of structural linking.

 

In the context of the dynamic nature of web applications, one can distinguish between static information that remains stable over time and dynamic information that is subject to changes.

 

Depending on the media type being delivered, either the data can be persistent, that is, accessible independently of time, or it can be transient, that is, accessible as a flow, as in the case of a video stream.

 

Moreover, metadata can also describe other data facilitating the usefulness of the data within the global information space established by the web.

 

Similarly, the machine-based processing of information is further supported by semantic web approaches that apply technologies like the resource description framework (RDF) to make metadata statements (e.g., about web page content) and express the semantics about associations between arbitrary resources worldwide.

 

Internet technologies relevant for Web analysis


1. Proxy servers: A proxy server is a network server that acts as an intermediary between the user’s computer and the actual server on which the website resides; proxy servers are used to improve service for groups of users.

 

First, it saves the results of all requests for a particular web page for a certain amount of time. Then, it intercepts all requests to the real server to see if it can fulfill the request itself. Say user A requests a certain web page; sometime later, user B requests the same page.

 

Instead of forwarding the request to the web server where Page 1 resides, which can be a time-consuming operation, the proxy server simply returns the copy of Page 1 that it already fetched for user A. Since the proxy server is often on the same network as the user, this is a much faster operation.

 

If the proxy server cannot serve a stored page, then it forwards the request to the real server. Importantly, pages served by the proxy server are not logged in the log files, resulting in inaccuracies in counting site traffic.
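A toy sketch of the proxy behavior described above, assuming an in-memory cache with a short time-to-live: a cached page is answered by the proxy itself, so the origin server never sees (or logs) the request. The TTL value and the fetchFromOrigin helper are illustrative.

```typescript
// Toy proxy-cache sketch: a cached page is returned directly, so the origin
// web server never sees the request and never logs it. TTL and helpers are illustrative.
const TTL_MS = 60_000; // keep a copy of each page for one minute
const cache = new Map<string, { body: string; storedAt: number }>();

async function proxyRequest(url: string): Promise<string> {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.body; // served from the proxy: no entry in the origin's log file
  }
  const body = await fetchFromOrigin(url); // only this path reaches the real server
  cache.set(url, { body, storedAt: Date.now() });
  return body;
}

// Stand-in for the actual forwarded HTTP request.
async function fetchFromOrigin(url: string): Promise<string> {
  return (await fetch(url)).text();
}
```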

 

Major online services (such as Facebook, MSN, and Yahoo!) and other large organizations employ an array of proxy servers in which all user requests are made through a single IP address. This situation causes weblog files to significantly under-report unique visitor traffic.

 

On the other hand, sometimes home users with an Internet Service Provider get assigned a new IP address each time they connect to the Internet. This causes the opposite effect of inflating the number of unique visits in the weblogs.

 

2. Firewalls: Acting as an intermediary device for the purpose of security rather than efficiency, a proxy server can also function as a firewall in an organization. Firewalls are used by organizations to protect internal users from outside threats on the Internet or to prevent employees from accessing a specific set of websites.

 

Firewalls hide the actual IP address for specific user computers and instead present a single generic IP address to the Internet for all its users. Hence, this contributes to under-reporting unique visitor traffic in web analytics.

 

3. Caching: Most web browser software keeps a copy of each web page, called a cache, in its memory. Rather than requesting the same page again from the server (for example, if the user clicks the “back” button), the browser displays its stored copy instead of making a new request to the server.

 

Many Internet Service Providers and large organizations cache web pages in an effort to serve content more quickly and reduce bandwidth usage. As with the use of proxy servers, caching poses a problem because weblog files don’t report these cached page views. Again, as a result, weblog files can significantly under-report the actual visitor count.

 

Social Network Applications


Social computing is the use of social software, which is based on creating or recreating online social conversations and social contexts through the use of software and technology.

 

An example of social computing is the use of email for maintaining social relationships. Social Networks (SN) are social structures made up of nodes and ties; they indicate the relationships between individuals or organizations and how they are connected through social contexts.

 

SN operate on many levels and play an important role in how problems are solved and how organizations are run, and they help individuals succeed in achieving their targets and goals. Computer-based social networks enable people in different locations to interact with each other socially (e.g., chat and viewable photos) over a network.

 

SN are very useful for visualizing patterns: a social network structure is made up of nodes and ties; there may be few or many nodes in a network, and one or more different types of relations between the nodes.

 

Building a useful understanding of a social network involves sketching a pattern of social relationships, kinships, community structure, and so forth. The use of mathematical and graphical techniques in social network analysis is important for representing descriptions of networks compactly and efficiently.

 

Social Networks operate on many different levels from families up to nations and play a critical role in determining the way problems are solved, organizations are run and the degree to which people succeed in achieving their goals.

 

Popular Social Networks


This section briefly describes popular social networks like LinkedIn, Facebook, Twitter, and Google+.

 

LinkedIn

LinkedIn is currently considered the de facto source of professional networking. Launched in 2003, it is the largest business-oriented social network with more than 260 million users. This network allows users to find the key people they may need to make introductions into the office of the job they may desire.


Users can also track friends and colleagues during times of promotion and hiring to congratulate them if they choose; this results in a complex social web of business connections.

 

In 2008, LinkedIn introduced their mobile app as well as the ability for users to not only endorse each other but also to specifically attest to individual skills that they may hold and have listed on the site. LinkedIn now supports more than 20 languages.

 

Users cannot upload their resumes directly to LinkedIn. Instead, a user adds skills and work history to their profile. Other users inside that social network can verify and endorse each attribute. This essentially makes a user’s presence on LinkedIn only as believable as the people they connect with.

 

Facebook


Facebook was created by Mark Zuckerberg at Harvard College. Launched in 2004, it grew rapidly and now has more than a billion and a half users.

 

In 2011, Facebook introduced personal timelines to complement a user’s profile; timelines show the chronological placement of photos, videos, links, and other updates made by a user and his or her friends.

 

Though a user can customize their timeline as well as the kind of content and profile information that can be shared with individual users, Facebook networks rely heavily on people posting comments publicly and also tagging people in photos. Tagging is a very common practice that places people and events together, though, if required, a user can always untag himself or herself.

 

Conceptually, the timeline is a chronological representation of a person’s life from birth until his or her death, or the present day if the person is still using Facebook. A user’s life can be broken up into pieces or categories that can be more meaningfully analyzed by the algorithms run by Facebook.

 

These categories include Work and Education, Family and Relationships, Living, Health and Wellness, and Milestones and Experiences. Each category contains four to seven subcategories. Users have granular control over who sees what content related to them, but less so about what they see in relation to other people.

 

Facebook is often accused of selling user information and not fully deleting accounts after users choose to remove them. Because Facebook has such a generalized privacy policy, it can get away with handling user information in almost any way it sees fit. Facebook has done many things to improve security in recent years.

 

Facebook has provided users with a detailed list of open sessions under their account name and given them the ability to revoke them at will. This is to say that, if an unauthorized person accesses a user’s account or the user forgets to log out of a computer, they can force that particular connection to close.

 

Location and time of access are listed for each open session, so a user can easily determine if their account is being accessed from somewhere unexpected.

 

When viewed through a web browser, Facebook supports HTTPS. This protocol is considered secure; however, it is not supported by mobile devices. Data transmitted by Facebook to mobile devices has been shown to be in plain text, meaning that if it is intercepted it is easily human readable.

 

However, the Global Positioning System (GPS) coordinates and information about your friends require special permission.

 

Default access granted to any Facebook app includes user ID, name, profile picture, gender, age range, locale, networks, list of friends, and any information set as public. Any of this information can be transmitted between devices at any time without a user’s express permission, and, in the case of mobile devices, in plain, unencrypted text.

 

Twitter


Twitter’s original idea was to design a system for individuals to share short SMS messages with a small group of people. Hence, tweets were designed to be short, leading to a limit of 140 characters per tweet. By 2013, Twitter had 200 million users sending 500 million tweets a day.

 

Twitter was originally designed to work with text messages. This is why the 140-character limit was put into the original design: to fit within text message length limits.

 

Twitter’s original design was to create a service that a person could send a text to; that text would not only be available online but could also be resent to other people using the service. Subsequently, Twitter has incorporated many different sources of media.

 

In 2010, Twitter added a facility for online video and photo viewing without redirection to third-party sites. In 2013, Twitter added its own music service as an iPhone app.

 

Despite Twitter’s continued expansion of supported content, the language used in modern tweets, along with some other helpful additions, has continued to adhere to the 140-character limit.

 

Google+


Google+ is the only social network to rival Facebook’s user base with more than a billion users. The main feature of Google+ is circles; by being part of the same circle, people create focused social networks. Circles allow networks to center around ideas and products; circles are also the way that streaming content is shared between people.

 

Circles generate content for users and help organize and segregate with whom information is shared. A user makes circles by placing other Google+ users into them. This is done through an interface very similar to Gmail and Google Maps.

 

When circles create content for a user, it is accumulated and displayed on their Stream. A user’s Stream is a prioritized list of any content from that user’s circles that they have decided to display. A user can control how much of a Circle’s content is included in their Stream. Circles can also be shared, either with individual users or other circles.

 

Sharing is a one-time action, meaning there is no subsequent syncing after the share takes place. Without sharing a Circle again, there are no further updates, so it is very easy for others to have outdated information about Circles that change on a regular basis.

 

If frequent updates are made and a user wants his or her network to stay up-to-date, a user may have to share a Circle quite frequently.

 

Google+ Pages are essentially profiles for businesses, organizations, publications, or other entities that are not related to a single individual. They can be added to Circles like normal users and share updates to user Streams in the same way. The real distinction is that Pages do not require a legal name to be attached to the associated Google account.

 

Google+ has a large number of additional services and support owing to its high level of integration with Google accounts including games, messenger, photo editing and saving, mobile upload and diagnostics, apps, calendars, and video streaming.

 

Hangouts, which is Google’s video-streaming application, is available free for use and supports up to 10 simultaneous users in a session. Hangouts can be used as a conference call solution or to create instant webcasts. Functionally, Hangouts is similar to programs like Skype.

 

Other Social Networks


Here are some of the other notable social networks:

 

1. Classmates.com was established in 1995 by Randy Conrads as a means for class reunions and has more than 50 million registered users. By linking together people from the same school and class year, Classmates.com provides individuals with a chance to “walk down memory lane” and get reacquainted with old classmates who have also registered with the site.

 

With a minimum age limit of 18 years, registration is free and anyone may search the site for classmates that they may know. Purchasing a gold membership is required to communicate with other members through the site’s email system.

 

User email addresses are private, and communication for paying members is handled through a double-blind email system that ensures that only paying members can make full use of the site, allowing unlimited communication and orchestration of activities for events like reunions.

 

2. Friendster was launched in 2002 by Jonathan Abrams as a general-purpose social network and was later acquired by a Malaysian company; its user base is made up primarily of Asian users. Friendster was redesigned and relaunched as a gaming platform in 2011, where it grew to a user base of more than 115 million.

 

Friendster filed many of the fundamental patents related to social networks. Eighteen of these patents were acquired by Facebook in 2011.

 

3. hi5 is a social network developed by Ramu Yalamanchi in 2003 in San Francisco, California, and was acquired by Tagged in 2011. All of the normal social network features were included, such as friend networks, photo sharing, profile information, and groups. In 2009, hi5 was redesigned as a purely social gaming network with a required age of 18 years for all new and existing users.

 

Several hundred games were added, and Application Programming Interfaces (APIs) were created that include support for Facebook games. This popular change boosted hi5’s user base, and at the time of acquisition, its user base was more than 80 million.

 

4. Orkut was a social network almost identical to Facebook that was launched in 2004 and was shut down by the end of September 2014. Orkut obtained more than 100 million users, most of which were located in India and Brazil.

 

5. Flickr is a photo-sharing website that was created in 2004 and was acquired by Yahoo! in 2005; photos and videos can also be accessed via Flickr. It has tens of millions of members sharing billions of images.

 

6. YouTube is a video-sharing website that was created in 2005 and was acquired by Google in 2006. Members, as well as corporations and organizations, post videos of themselves as well as various events and talks. Movies and songs are also posted on this website.

 


 

BlackBerry OS

Research In Motion (RIM) is a Canadian designer, manufacturer, and marketer of wireless solutions for the worldwide mobile communications market. Products include the BlackBerry wireless email solution, wireless handhelds, and wireless modems.

 

RIM is the driving force behind BlackBerry smartphones and the BlackBerry solution. RIM provides a proprietary multitasking OS for the BlackBerry, which makes heavy use of specialized input devices, particularly the scroll wheel or more recently the trackball.

 

BlackBerry offers the best combination of mobile phone, server software, push email, and security from a single vendor. It integrates well with other platforms, it works with several carriers, and it can be deployed globally for a sales force that is on the move.

 

It is easy to manage, has a longer than usual battery life, and has a small form-factor with an easy-to-use keyboard. BlackBerry is good for access to some of the simpler applications, such as contact list, time management, and field force applications.

 

Google Android

Google’s Android mobile platform is the latest mobile platform on the block. This open-source development platform is built on the Linux kernel, and it includes an operating system (OS), a middleware stack, and a number of mobile applications.

 

Enterprises will benefit from Android because the availability of open-source code for the entire software stack will allow the existing army of Linux developers to create special-purpose applications that will run on a variety of mobile devices.

 

Android is the open-source mobile OS launched by Google. It is intuitive, user-friendly, and graphically similar to the iPhone and BlackBerry. Being open source, Android applications may be cheaper, and the spread of Android will possibly increase. The kernel is based on Linux v2.6 and supports 2G, 3G, Wi-Fi, IPv4, and IPv6.

 

At the multimedia level, Android works with OpenGL and several images, audio, and video formats. The persistence is assured with the support of the SQLite. Regarding security, Android uses SSL and encryption algorithms.

 

If Android makes it into phones designed specifically for the enterprise, those products will have to include technology from the likes of Sybase, Intellisync or another such company to enable security features like remote data wipe functionality and forced password changes.

 

Apple iOS

iPhone OS (now iOS) is Apple’s proprietary mobile OS, derived from the operating system used in Macintosh machines; an optimized version is used in the iPhone and iPod Touch.

The simplicity and robustness of both menu navigation and in-application navigation are two of the main strengths of the OS. iPhone OS is also equipped with good-quality multimedia software, including games and music and video players. It also has a good set of tools, including image editing and a word processor.

 

Windows Phone

Windows Mobile, a variant of Windows CE (officially known as Windows Embedded Compact), was initially developed for Pocket PCs and arrived on HTC mobile phones by 2002. This OS was engineered to offer data and multimedia services.

 

By 2006, Windows Mobile had become available to the developer community. Many new applications were built on the system, turning Windows Mobile into one of the most widely used mobile systems.

 

Windows Mobile comes in two flavors. A smartphone edition is good for wireless email, calendaring, and voice notes. A Pocket PC edition adds mobile versions of Word, Excel, PowerPoint, and Outlook. Palm’s Treo 700w, with the full functionality of the Pocket PC edition, is a better choice for sales force professionals.

 

The main draw of the Windows Mobile operating system is its maker Microsoft. Windows Mobile also actively syncs to the Exchange and SQL servers. This augurs very well for use by the sales force.

 

Mobile sales force solutions for Windows Mobile are available from companies like SAP, Siebel, PeopleSoft, and Salesforce.com as well as other leading solution providers.

 

Windows Mobile permits Bluetooth connections through the Winsock interface. It also allows 802.11x, IPv4, IPv6, VoIP (Voice over IP), GSM, and CDMA (Code Division Multiple Access) connections.

 

Some of the main applications available are Pocket Outlook (an adapted version of the desktop Outlook), Word, and Excel. It also provides Messenger, a browser, and remote desktop.

 

The remote desktop offers easy access to other mobile or fixed terminals. The ActiveSync application facilitates synchronization between mobile devices and desktops.

 

At the multimedia level, Windows Mobile reproduces music, video, and 3D applications. Security is also a concern, so Secure Socket Layer (SSL), Kerberos, and the use of encryption algorithms are available.

 

Mobile Web Services


Web services are the cornerstone of building a globally distributed information system in which many individual applications take part; building a powerful application whose capability is not limited to local resources will unavoidably require interacting with other partner applications through web services across the Internet.

 

The strengths of web services come from the fact that web services use XML and related technologies connecting business applications based on various computers and locations with various languages and platforms. The counterpart of the WS in the context of mobile business processes would be Mobile Web Services (MWS).

 

MWS is proposed as the basis for communication between the Internet and wireless devices such as mobile phones, PDAs, and so forth. The integration of wireless device applications with other applications would be a very important step toward global enterprise systems.

 

Similar to WS, MWS is also based on the industry-standard language XML and related technologies such as SOAP, WSDL, and UDDI.
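For a feel of what a SOAP-based web service call looks like on the wire, here is a minimal sketch. The endpoint, XML namespace, and GetAccountBalance operation are hypothetical; only the envelope structure and Content-Type header follow the SOAP convention.

```typescript
// Minimal SOAP request sketch. The service endpoint, XML namespace, and the
// GetAccountBalance operation are hypothetical; the envelope shape is standard SOAP.
const soapEnvelope = `<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetAccountBalance xmlns="http://example.com/mws">
      <accountId>12345</accountId>
    </GetAccountBalance>
  </soap:Body>
</soap:Envelope>`;

async function callSoapService(): Promise<string> {
  const response = await fetch("https://example.com/mws/service", {
    method: "POST",
    headers: { "Content-Type": "text/xml; charset=utf-8" },
    body: soapEnvelope,
  });
  return response.text(); // the SOAP response envelope, also XML
}
```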

 

Many constraints make the implementation of WS in a mobile environment very challenging. The challenge comes from the fact that mobile devices have limited power and capacity:

 

  • Limited battery power, lasting only a few hours
  • Small memory capacity
  • Small processors, not powerful enough to run larger applications
  • Small screen size, especially on mobile phones, which requires developing specific websites of suitable size
  • A small keypad that makes it harder to enter data
  • Small hard disk
  • Variable speed of data communication between the device and the network

 

The most popular MWS is a proxy-based system where the mobile device connects to the Internet through a proxy server. Most of the processing of the business logic of the mobile application will be performed on the proxy server that transfers the results to the mobile device that is mainly equipped with a user interface to display output on its screen.

 

The other important advantage a proxy server provides in MWS is that, instead of the client application residing on the mobile device connecting to many service providers and consuming most of the mobile processor and bandwidth, the proxy communicates with the service providers, does some processing, and sends back only the final result to the mobile device.
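A minimal sketch of that proxy-based pattern, with hypothetical provider URLs and response shapes: the proxy fans out to several service providers, does the comparison itself, and returns a single small result for the handset to display.

```typescript
// Proxy-based MWS sketch: the proxy fans out to several service providers,
// aggregates the answers, and returns one small result to the mobile client.
// Provider URLs and response shapes are hypothetical.
const providers = [
  "https://provider-a.example.com/quote",
  "https://provider-b.example.com/quote",
  "https://provider-c.example.com/quote",
];

async function bestQuoteForDevice(productId: string): Promise<{ provider: string; price: number }> {
  const quotes = await Promise.all(
    providers.map(async url => {
      const res = await fetch(`${url}?product=${encodeURIComponent(productId)}`);
      const data: { price: number } = await res.json();
      return { provider: url, price: data.price };
    })
  );
  // Heavy lifting (the comparison) happens on the proxy; only the winner is sent back.
  return quotes.reduce((best, q) => (q.price < best.price ? q : best));
}
```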

 

In the realistic case where the number of mobile devices reaches the range of tens of millions, the proxy server would be in the cloud and the service providers would be cloud service providers.

 

Mobile web services use existing industry-standard XML-based web services architecture to expose mobile network services to the broadest audience of developers.

 

Developers will be able to access and integrate mobile network services such as messaging, location-based content delivery, syndication, personalization, identification, authentication, and billing services into their applications.

 

This will ultimately enable solutions that work seamlessly across stationary networks and mobile environments. Customers will be able to use mobile web services from multiple devices on both wired and wireless networks.

 

The aim of the mobile web services effort is twofold:


1. To create a new environment that enables the IT industry and the mobile industry to create products and services that meet customer needs in a way not currently possible within the existing web services practices.

 

With web services being widely deployed as the SOA of choice for internal processes in organizations, there is also an emerging demand for using web services enabling mobile working and e-business.

 

By integrating Web Services and mobile computing technologies, consistent business models can be enabled on a broad array of endpoints: not just on mobile devices operating over mobile networks but also on servers and computing infrastructure operating over the Internet.

 

To make this integration happen at a technical level, mechanisms are required to expose and leverage existing mobile network services.

 

Also, practices for how to integrate the various business needs of the mobile network world and their associated enablers such as security must be developed. The result is a framework, such as the Open Mobile Alliance, that demonstrates how the web service specifications can be used and combined with mobile computing technology and protocols to realize practical and interoperable solutions.

 

Successful mobile solutions that help architect customers’ service infrastructures need to address security, availability, and scalability concerns both at the functional level and at the end-to-end solution level, rather than just offering fixed-point products.

 

What is required is a standard specification and an architecture that tie together service discovery, invocation, authentication, and other necessary components—thereby adding context and value to web services.

 

In this way, operators and enterprises will be able to leverage the unique capabilities of each component of the end-to-end network and shift the emphasis of service delivery from devices to the human user.

 

Using a combination of wireless, broadband, and wireline devices, users can then access any service on demand, with a single identity, a single set of service profiles, and personalized service delivery as dictated by the situation.

 

There are three important requirements to accomplish user (mobile-subscriber)-focused delivery of mobile services: federated identity, policy, and federated context. Integrating identity, policy, and context into the overall mobile services architecture enables service providers to differentiate the user from the device and deliver the right service to the right user on virtually any device:

 

a. Federated identity: In a mobile environment, users are not seen by software applications and processes as individuals (e.g., mobile subscribers) tied to a particular domain, but rather as entities that are free to traverse multiple service networks.

 

This requirement demands a complete federated network identity model to tie the various personas of an individual without compromising privacy or loss of ownership of the associated data.

 

The federated network identity model allows the implementation of seamless single sign-on for users interacting with applications.

 

It also ensures that user identity, including transactional information and other personal information, is not tied to a particular device or service, but rather is free to move with the user between service providers. Furthermore, it guarantees that only appropriately authorized parties are able to access protected information.

 

b. Policy: User policy, including roles and access rights, is an important requirement for allowing users not only to have service access within their home network but also to move outside it and still receive the same access to services.

 

Knowing who the user is and what role they fulfill at the moment they are using a particular service is essential to providing the right service in the right instance. The combination of federated identity and policy enables service providers and users to strike a balance between access rights and user privacy.

 

c. Federated context: Understanding what the user is doing, what they ask, why it is being requested, where they are, and what device they are using is an essential requirement.

 

The notion of federated context means accessing and acting upon a user’s current location, availability, presence, and role, for example, at home, at work, on holiday, and other situational attributes.

 

This requires the intelligent synthesis of information available from all parts of the end-to-end network and allows service providers and enterprises to deliver relevant and timely applications and services to end users in a personalized manner.

 

For example, information about the location and availability of a user’s device may reside on the wireless network, the user’s calendar may be on the enterprise intranet, and preferences may be stored in a portal.

 

2. To help create web services standards that will enable new business opportunities by delivering integrated services across stationary (fixed) and wireless networks.

 


 

Delivering appealing, low-cost mobile data services, including ones that are based on mobile Internet browsing and mobile commerce, is proving increasingly difficult to achieve.

 

The existing infrastructure and tools as well as the interfaces between Internet/ web applications and mobile network services remain largely fragmented, characterized by tightly coupled, costly, and close alliances between value-added service providers and a complex mixture of disparate and sometimes overlapping standards (WAP, MMS, Presence, Identity, etc.) and proprietary models (e.g., propriety interfaces).

 

This hinders interoperable solutions for the mobile sector and at the same time drives up the cost of application development and, ultimately, of the services offered to mobile users.

 

Such problems have given rise to initiatives for standardizing mobile web services. The most important of these are the Open Mobile Alliance and the mobile web services frameworks examined below.

 

Mobile Field Cloud Services

Cloud Services

Companies that can outfit their employees with devices like PDAs, laptops, multifunction smartphones, or pagers will begin to bridge the costly chasm between the field and the back office.

 

For example, transportation costs for remote employees can be significantly reduced, and productivity can be significantly improved by eliminating needless journeys back to the office to file reports, collect parts, or simply deliver purchase orders.

 

Wireless services are evolving toward the goal of delivering the right cloud service to whoever needs it, for example, employees, suppliers, partners, and customers, at the right place, at the right time, and on any device of their choice.

 

The combination of wireless handheld devices and cloud service delivery technologies presents the opportunity for an entirely new paradigm of information access that, in the enterprise context, can substantially reduce delays in the transaction and fulfillment process and lead to improved cash flow and profitability.

 

A field cloud services solution automates, standardizes, and streamlines manual processes in an enterprise and helps centralize disparate systems associated with customer service life-cycle management, including customer contact, scheduling and dispatching, mobile workforce communications, resource optimization, work order management, time, labor, and material tracking, billing, and payroll.

 

A field web services solution seamlessly links all elements of an enterprise’s field service operation—customers, service engineers, suppliers, and the office—to the enterprise’s stationary infrastructure, wireless communications, and mobile devices. Field web services provide real-time visibility and control of all calls and commitments, resources, and operations.

 

They effectively manage business activities such as call taking and escalation, scheduling and dispatching, customer entitlements and SLAs, work orders, service contracts, time sheets, labor and equipment tracking, invoicing, resource utilization, reporting, and analytics.

 

Of particular interest to field services are location-based services, notification services, and service disambiguation, as these mechanisms enable developers to build more sophisticated cloud service applications by providing accessible interfaces to advanced and intelligent mobile features:

 

1. Location-based services provide information specific to a location using the latest positioning technologies and are a key part of the mobile web services suite. Dispatchers can use GPS or network-based positioning information to determine the location of field workers and optimally assign tasks (push model) based on geographic proximity (a minimal proximity-assignment sketch appears after this list).

 

Location-based services and applications enable enterprises to improve operational efficiencies by locating, tracking, and communicating with their field workforce in real time.

 

For example, location-based services can be used to keep track of vehicles and employees, whether they are conducting service calls or delivering products. Trucks could be pulling in or out of a terminal, visiting a customer site, or picking up supplies from a manufacturing or distribution facility.

 

With location-based services, applications can obtain such things as real-time status alerts, for example, estimated time of approach, arrival, departure, duration of the stop, and current information on traffic, weather, and road conditions for both home-office and en route employees.

 

2. Notification services allow critical business to proceed uninterrupted when employees are away from their desks, by delivering notifications to their preferred mobile device. Employees can thus receive real-time notification when critical events occur, such as when incident reports are completed.

 

The combination of location-based and notification services provides added value by enabling such services as proximity-based notification and proximity-based actuation.

 

Proximity-based notification is a push or pull interaction model that includes targeted advertising, automatic airport check-in, and sightseeing information. Proximity-based actuation is a push-pull interaction model, whose most typical example is payment based on proximity, for example, at a road tollgate.

 

3. Service instance disambiguation helps distinguish between many similar candidate service instances that may be available within close proximity. For instance, there may be many on-device payment services in the proximity of a single point of sale.

 

Convenient and natural ways for identifying appropriate service instances are then required, for example, relying on closeness or pointing rather than identification by cumbersome unique names.
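
To make the push-model assignment described in item 1 concrete, here is a minimal Python sketch that picks the field worker nearest to a job site using the haversine great-circle distance. The worker names, coordinates, and data layout are invented for illustration.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres between two GPS fixes."""
        r = 6371.0  # mean Earth radius in km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def assign_nearest_worker(job_location, workers):
        """Pick the field worker closest to the job site (push model)."""
        lat, lon = job_location
        return min(workers, key=lambda w: haversine_km(lat, lon, w["lat"], w["lon"]))

    workers = [
        {"name": "engineer-1", "lat": 40.7128, "lon": -74.0060},
        {"name": "engineer-2", "lat": 40.7306, "lon": -73.9352},
    ]
    print(assign_nearest_worker((40.7500, -73.9900), workers)["name"])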

 

Context-Aware Mobile Applications

Mobile Applications

A mobile application is context-aware if it uses context to provide relevant information to users or to enable services for them; relevancy depends on a user’s current task (and activity) and profile (and preferences).

 

Apart from knowing who the users are and where they are, we need to identify what they are doing, when they are doing it, and which object they focus on. The system can define user activity by taking into account various sensed parameters like location, time, and the object that they use.
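
As a toy illustration of combining who, where, when, and which object into an activity estimate, consider the sketch below. The rules and parameter names are assumptions made up for the example; a real system would fuse many more sensed parameters.

    from datetime import datetime

    def infer_activity(user, location, timestamp, focused_object):
        """Very rough activity inference from a few sensed parameters."""
        hour = timestamp.hour
        if location == "office" and focused_object == "desktop" and 9 <= hour < 17:
            return f"{user} is working at the desk"
        if location == "car" and focused_object == "navigation":
            return f"{user} is driving"
        return f"{user}'s activity is unknown"

    print(infer_activity("alice", "office", datetime(2019, 5, 6, 10, 30), "desktop"))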

 

In outdoor applications, and depending on the mobile devices used, satellite-supported technologies such as GPS, or network-supported cell information such as GSM, UMTS, and WLAN, are applied. Indoor applications use RFID, IrDA, and Bluetooth technologies in order to estimate the user's position in space.

 

Time is another significant context parameter that can play an important role in extracting information on user activity, but the objects used in mobile applications are the most crucial context sources.

 

In mobile applications, the user works with mobile devices, such as mobile phones and PDAs, as well as everyday objects enhanced with computing and communication abilities. Sensors attached to these artifacts provide applications with information about what the user is utilizing.

 

In order to present the user with the requested information in the best possible form, the system has to know the physical properties of the artifact that will be used (e.g., the display characteristics of the artifact's screen).

 

The types of interaction interfaces that an artifact provides to the user need to be modeled (e.g., whether the artifact can be handled by both speech and touch techniques), and the system must know how it is designed.

 

Thus, the system has to know the number of each artifact's sensors and their positions in order to grade context information with a level of certainty. Based on information about an artifact's physical properties and capabilities, the system can extract information on the services that it can provide to the user.

 

In context-aware mobile applications, artifacts are considered content providers. They allow users to access context in a high-level, abstracted form, and they inform the application's other artifacts so that context can be used according to the application's needs.

 

Users are able to establish associations between the artifacts based on the context that they provide; note that the services enabled by artifacts are themselves provided as context. Thus, users can indicate their preferences, needs, and desires to the system by determining the behavior of the application via the associations they create.

 

The set of sensors attached to an artifact measures various parameters such as location, time, temperature, proximity, and motion; the raw data provided by these sensors determine the artifact's low-level context. The aggregation of such low-level context information from various homogeneous and heterogeneous sensors results in high-level context information.
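
The following sketch illustrates, under simple assumptions, how raw (low-level) sensor readings might be fused into a high-level context label together with a rough certainty value. The sensor names, thresholds, labels, and the certainty measure are all invented for the example.

    def high_level_context(readings):
        """Fuse raw (low-level) sensor readings into a high-level label.

        `readings` maps sensor name -> value; thresholds, labels, and the
        simple certainty measure are illustrative assumptions.
        """
        moving = readings.get("motion", 0.0) > 0.5
        indoors = readings.get("proximity_beacon") is not None
        label = "user is on the move" if moving else "user is stationary"
        if indoors:
            label += " indoors"
        # Certainty grows with the number of sensors that actually reported.
        expected = ("motion", "proximity_beacon", "temperature")
        certainty = len([k for k in expected if k in readings]) / len(expected)
        return label, certainty

    print(high_level_context({"motion": 0.1, "proximity_beacon": "room-12"}))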

 

Ontology-Based Context Model


This ontology is divided into two layers: a common layer that contains the description of the basic concepts of context-aware applications and their interrelations, representing the common language among artifacts, and a private layer that represents an artifact's own description as well as the new knowledge or experience acquired from its use.

 

The common ontology defines the basic concepts of a context-aware application; such an application consists of a number of artifacts and their associations.

 

The concept of artifact is described by its physical properties and its communication and computational capabilities; the fact that an artifact has a number of sensors and actuators attached is also defined in our ontology.

 

Through the sensors, an artifact can perceive a set of parameters based on which the state of the artifact is defined; an artifact may also need these parameters in order to sense its interactions with other artifacts as well as with the user.

 

The ontology also defines the interfaces via which artifacts may be accessed in order to enable the selection of the appropriate one. The common ontology captures an abstract form of these concepts, especially of the context parameters, as more detailed descriptions are stored in each artifact's private ontology.

 

For instance, the private ontology of an artifact that represents a car contains a full description of the different components in a car as well as their types and their relations.

 

The basic goal of the proposed ontology-based context model is to support a context management process based on a set of rules that determine how a decision is made and that are applied to the existing knowledge represented by this ontology.

 

The rules that can be applied during such a process fall into the following categories: rules for an artifact's state assessment, which define the artifact's state based on its low- and high-level context; rules for local decisions, which exploit only an artifact's own knowledge in order to decide its reaction (such as requesting or providing a service); and, finally, rules for global decisions, which take into account the states and possible reactions of various artifacts in order to preserve a global state defined by the user.
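
A minimal Python sketch of the three rule categories follows; the artifact names, context fields, and rules are invented purely to illustrate how such rules could be applied to artifacts' context.

    # Minimal sketch of the three rule categories; all names are illustrative.
    def assess_state(artifact):
        """State-assessment rule: derive the artifact's state from its context."""
        ctx = artifact["context"]
        artifact["state"] = "occupied" if ctx.get("presence") else "free"

    def local_decision(artifact):
        """Local-decision rule: react using this artifact's knowledge only."""
        if artifact["state"] == "occupied":
            return f"{artifact['name']}: request 'do not disturb' service"
        return f"{artifact['name']}: no action"

    def global_decision(artifacts, desired_global_state="quiet home"):
        """Global-decision rule: consider all artifacts to preserve a user-defined state."""
        if desired_global_state == "quiet home" and any(a["state"] == "occupied" for a in artifacts):
            return "mute all speaker artifacts"
        return "no global action"

    artifacts = [
        {"name": "sofa", "context": {"presence": True}},
        {"name": "desk", "context": {"presence": False}},
    ]
    for a in artifacts:
        assess_state(a)
        print(local_decision(a))
    print(global_decision(artifacts))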

 

Context Support for User Interaction

User Interaction

The ontology-based context model that we propose empowers users to compose their own personal mobile applications. In order to compose their applications, they first have to select the artifacts that will participate and establish their associations.

 

They set their own preferences by associating artifacts, denoting the sources of context that artifacts can exploit, and defining the interpretation of this context through rules in order to enable various services.

 

As the context acquisition process is decoupled from the context management process, users are able to create their own mobile applications while avoiding the problems that emerge from the adaptation and customization of applications, such as disorientation and system failures.

 

The goal of context in computing environments is to improve interaction between users and applications. This can be achieved by exploiting context, which works like implicit commands and enables applications to react to users or surroundings without the users’ explicit commands.

 

Context can also be used to interpret explicit acts, making interaction much more efficient. Thus, context-aware computing completely redefines the basic notions of interface and interaction.

 

In this section, we present how our ontology-based context model enables the use of context in order to assist human-computer interaction in mobile applications and to achieve the selection of the appropriate interaction technique. Mobile systems have to provide multimodal interfaces so that users can select the most suitable technique based on their context.

 

The ontology-based context model that we presented in the previous section captures the various interfaces provided by the application’s artifacts in order to support and enable such selections. Similarly, the context can determine the most appropriate interface when a service is enabled.
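
As a rough illustration of context-driven selection of an interaction technique, the sketch below chooses among a few modalities based on the user's situation. The technique names, context fields, and thresholds are assumptions made up for the example.

    def select_interface(context):
        """Choose an interaction technique for a service based on context."""
        if context.get("activity") == "driving":
            return "speech"                      # hands and eyes are busy
        if context.get("noise_level", 0) > 70:   # dB, illustrative threshold
            return "touch"                       # speech recognition unreliable
        if context.get("device") == "wall display":
            return "gesture"
        return "touch"

    print(select_interface({"activity": "driving"}))                 # -> speech
    print(select_interface({"noise_level": 85, "device": "phone"}))  # -> touch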

 

Ubiquitous and mobile interfaces must be proactive in anticipating needs, while at the same time working as a spatial and contextual filter for information so that the user is not inundated with requests for attention.

 

Context can also assist designers to develop mobile applications and manage various interfaces and interaction techniques that would enable more satisfactory and faster closure of transactions.

 

Ease of use is an important requirement for mobile applications; by using context according to our approach, designers are shielded from the difficult task of context acquisition and merely have to define, through simple rules, how context from the various artifacts is exploited.

 

Our approach presents an infrastructure capable of handling, substituting, and combining complex interfaces when necessary. The rules applied to the application’s context and the reasoning process support the application’s adaptation.

 

The presented ontology-based context model is easily extended; new devices, new interfaces, and novel interaction techniques can be incorporated into a mobile application by simply adding their descriptions to the ontology.

 

Mobile Web 2.0

Mobile Web

Mobile Web 2.0 results from the convergence of Web 2.0 services and the proliferation of web-enabled mobile devices. Web 2.0 facilitates interactive information sharing, interoperability, user-centered design, and collaboration among users.

 

This convergence is leading to a new communication paradigm, where mobile devices act not only as mere consumers of information but also as complex carriers for getting and providing information, and as platforms for novel services.

 

Mobile Web 2.0 represents both an opportunity for creating novel services and an extension of Web 2.0 applications to mobile devices.

 

The management of user-generated content, content personalization, and community and information sharing is much more challenging in a context characterized by devices with limited capabilities in terms of display, computational power, storage, and connectivity.

 

Furthermore, novel services require support for real-time determination and communication of the user position.

 

Mobile Web 2.0 comprises the following:


1. Sharing services, which are characterized by the publication of content to be shared with other users. Sharing services offer users the capability to store, organize, search, and manage heterogeneous content.

 

This content may be rated, commented on, tagged, and shared with specified users or groups, who can usually view the stored resources chronologically, by category, rating, or tags, or via a search engine.

 

Multimedia sharing services relate to the sharing of multimedia resources, such as photos or videos. These resources are typically generated by the users themselves, who use the sharing service to upload and publish their own content. Popular examples of web portals offering a multimedia sharing service include Flickr, YouTube, and Mocospace.

 

2. Social services, which refer to the management of social relationships among users. These comprise services such as the following: Community management services enable registered users to maintain a list of contact details of people they know.

 

Their key feature is the possibility of creating and updating a personal profile that includes information such as the user's preferences and list of contacts.

 

These contacts may be used in different ways depending on the purpose of the service, which may range from the creation of a personal network of business and professional contacts (e.g., LinkedIn), to the management of social events (e.g., Meetup), and up to the connection with old and new friends (e.g., Facebook).

 

Blogging services enable a user to create and manage a blog, that is, a sort of personal online journal, possibly focused on a specific topic of interest. Blogs are usually created and managed by an individual or a limited group of people, namely author(s), through regular entries of heterogeneous content, including text, images, and links to other resources related to the main topic, such as other blogs, web pages, or multimedia contents.

 

A blog is not a simple online journal, because the large majority of them allow external comments on the entries. The final effect is the creation of a discussion forum that engages readers and builds a social community around a person or a topic.

 

Other related services may also include blogrolls (i.e., links to other blogs that the author reads) to indicate social relationships to other bloggers. Among the most popular portals that allow users to manage their own blog, we cite BlogSpot, Wordpress, and so on.

 

Microblogging services are characterized by very short message exchanges among users. Although this class of services originates from the blogging category, there are important differences between microblogging and traditional blogs; most notably, the exchanged messages are significantly smaller.

 

The purpose of microblogging is to capture and communicate the users' instantaneous thoughts or feelings, and the recipients of the communication may differ from those of traditional blogs because microblogging allows authors to interact with a group of selected friends. Twitter is an example of a portal providing microblogging services.

 

3. Location services, which tailor information and content on the basis of the user's location. Knowledge of the user's current location may be exploited in several ways to offer value-added services.

 

People discovery services enable locating a user's friends; usually these services plot the positions of the user and his or her friends on a map. The geographical locations of the users are uploaded to the system by means of a positioning system installed on the users' mobile devices.

 

Point of interest (POI) discovery exploits geographical information to locate POIs, such as events, restaurants, museums, and any kind of attraction that may be useful or interesting to a user. These services offer users a list of nearby POIs selected on the basis of their personal preferences and specifications (a minimal POI-filtering sketch follows after this list).

 

POIs are collected by exploiting collaborative recommendations from other users, who may add a new POI by uploading its geographical location, possibly determined through a GPS positioning system installed on the mobile device. Users may also upload short descriptions, comments, tags, and images or videos depicting the place.
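
The POI discovery idea can be sketched as follows: filter nearby POIs by distance and by the user's declared interests, then rank them by proximity. The distance helper is a crude planar approximation suitable only for short ranges, and all POI data, categories, and coordinates are invented for illustration.

    import math

    def approx_km(lat1, lon1, lat2, lon2):
        """Rough planar distance for short ranges (illustrative only)."""
        dx = (lon2 - lon1) * 111.32 * math.cos(math.radians((lat1 + lat2) / 2))
        dy = (lat2 - lat1) * 110.57
        return math.hypot(dx, dy)

    def nearby_pois(user_pos, interests, pois, radius_km=2.0):
        """Return POIs within `radius_km` that match the user's interests, nearest first."""
        lat, lon = user_pos
        hits = [p for p in pois
                if p["category"] in interests
                and approx_km(lat, lon, p["lat"], p["lon"]) <= radius_km]
        return sorted(hits, key=lambda p: approx_km(lat, lon, p["lat"], p["lon"]))

    pois = [
        {"name": "City Museum", "category": "museum", "lat": 45.465, "lon": 9.190},
        {"name": "Trattoria X", "category": "restaurant", "lat": 45.467, "lon": 9.186},
    ]
    print(nearby_pois((45.464, 9.188), {"museum"}, pois))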

 

Mobile Analytics


The objectives of mobile analytics are twofold: prediction and description—prediction of unknown or future values of selected variables, such as interests or location of mobiles, and description in terms of human behavior patterns.

 

Description involves gaining “insights” into mobile behaviors, whereas prediction involves improving decision making for brands, marketers, and enterprises.

 

This can include the modeling of sales, profits, the effectiveness of marketing efforts, and the popularity of apps and mobile sites. The key is to recognize what data is being aggregated and not only to create and issue metrics on mobile activity but, more importantly, to leverage that data via the mining of mobile device data to improve sales and revenue.

 

For years, retailers have been testing new marketing and media campaigns, new pricing promotions, and the merchandising of new products with freebies and half-price deals, as well as a combination of all of these offers, in order to improve sales and revenue.

 

With mobiles, it has become increasingly easy to generate the data and metrics for mining and precisely calibrating consumer behaviors.

 

Brands and companies leveraging mobile analytics can be more adept at identifying, co-opting, and shaping consumer behavior patterns to increase profits. Brands and mobile marketers that figure out how to induce new habits can enhance their bottom lines. Inducing a new habit loop can be used to introduce new products, services, and content via the offer of coupons or deals based on the location of mobiles.

 

Mobile Site Analytics

Mobile site analytics can help the brand and companies solve the mystery of how mobile consumers are engaging and interacting with their site.

 

Without dedicated customer experience metrics, brands, marketers, and companies cannot tell whether the mobile site experience actually got better or how changes in the quality of that experience affected the site’s business performance.

 

Visitors tend to focus on three basic things when evaluating a mobile site: usefulness, ease-of-use, and how enjoyable it is. Metrics should measure these criteria with completion rates and survey questions.
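
As a small sketch of such customer experience metrics, the snippet below computes a task completion rate and a mean survey score from toy session and survey data; the field names and the 1-5 rating scale are assumptions made up for the example.

    def completion_rate(sessions):
        """Fraction of sessions in which the visitor finished the key task."""
        done = sum(1 for s in sessions if s["task_completed"])
        return done / len(sessions) if sessions else 0.0

    def mean_survey_score(responses):
        """Average 1-5 rating across usefulness, ease-of-use, and enjoyment questions."""
        return sum(responses) / len(responses) if responses else 0.0

    sessions = [{"task_completed": True}, {"task_completed": False}, {"task_completed": True}]
    print(f"completion rate: {completion_rate(sessions):.0%}")
    print(f"mean survey score: {mean_survey_score([4, 5, 3, 4]):.1f}")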

 

Mobile Clustering Analysis

Clustering is the partition of a dataset into subsets of “similar” data, without using a priori knowledge about properties or existence of these subsets.

 

For example, a clustering analysis of mobile site visitors might discover a high propensity among Android device users to make larger purchases of, say, Apple mobiles. Clusters can be mutually exclusive (disjoint) or overlapping. Clustering can lead to the autonomous discovery of typical customer profiles.

 

Clustering detection is the creation of models that find mobile behaviors that are similar to each other; these clumps of similarity can be discovered using self-organizing map (SOM) software to find previously unknown patterns in mobile datasets.

 

Unlike classification software, which analyzes data in order to predict mobile behaviors, clustering software is "let loose" on the data; there are no target variables. Instead, it is about exploratory, autonomous knowledge discovery.

 

The clustering software automatically organizes itself around the data with the objective of discovering some meaningful hidden structures and patterns of mobile behaviors.

 

This type of clustering can be done to discover keywords or mobile consumer clusters, and it is a useful first step for mining mobiles. It allows for the mapping of mobiles into distinct clusters of groups without any human bias.

 

Clustering is often performed as a prelude to the use of classification analysis using rule-generating or decision-tree software for modeling mobile device behaviors.

 

Market basket analysis using a SOM is useful in situations where the marketer or brand wants to know what items or mobile behaviors occur together or in a particular sequence or pattern.

 

The results are informative and actionable because they can lead to the organization of offers, coupons, discounts, and the offering of new products or services that prior to the analysis were unknown.

 

Clustering analyses can lead to answers to such questions as why products or services sell together, or who is buying what combinations of products or services; they can also map what purchases are made and when. Unsupervised knowledge discovery occurs when one cluster is compared to another and new insight is revealed as to why they differ.

 

For example, SOM software can be used to discover clusters of locations, interests, models, operating systems, mobile site visitors, and app downloads, thus enabling a marketer or developer to discover unique features of different consumer mobile groupings.
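
A minimal sketch of SOM-based clustering of mobile visitor records follows. It assumes the third-party minisom package (the text does not name a specific SOM library), and the feature vectors are invented for illustration; visitors mapped to the same winning neuron form one behavioral cluster.

    import numpy as np
    from minisom import MiniSom  # third-party package; one SOM implementation among many

    # Toy feature vectors per visitor: [session length (min), pages viewed,
    # purchase amount, app downloads]; values are invented for illustration.
    visitors = np.array([
        [3.0, 4, 0.0, 0],
        [12.0, 15, 45.0, 2],
        [2.5, 3, 0.0, 0],
        [14.0, 18, 60.0, 3],
    ], dtype=float)

    # Normalize each feature to [0, 1] so no single feature dominates.
    data = (visitors - visitors.min(axis=0)) / (np.ptp(visitors, axis=0) + 1e-9)

    som = MiniSom(x=2, y=2, input_len=data.shape[1], sigma=0.8, learning_rate=0.5, random_seed=1)
    som.train_random(data, num_iteration=500)

    # Each visitor is mapped to the coordinates of its winning neuron.
    for row in data:
        print(som.winner(row))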
