This blog post is my invited Perspective and Outlook, which appears in the
Internet of Anything (IoA) theme issue of the IEEE Computer Society's IT Professional (May-June 2015), guest edited by Irena Bojanova, Jeff Voas, and George Hurlburt. A video to go along with this article (same content) is here.


Everyone reading this will be aware of the explosive growth of sensors and devices that communicate, or the Internet of Things (IoTs for short). IoTs now cover virtually every aspect of human interests and existence. They are within our bodies and on our bodies, observing our activities; monitoring and reporting on our appliances, houses, and buildings, our cars and environment, and many facets of our cities, planet, oceans, and space. They are starting to play a role in our health, fitness, and well-being, our comfort and entertainment, our financial activities, and many other facets of life.

The pace of development of new types of sensors and devices is already rapid. The data that IoTs create is accessible through the Internet, so accessing and delivering this data is not a major challenge. However, since 2008 we have lacked the capacity to store all the data we generate. This leads to one particular challenge we face: do we have the capacity to analyze all this data in a timely manner, to determine whether it is of interest or value to anyone for a specific purpose? According to one estimate, only 0.5% of all data gets analyzed today, and that figure is certain to go down!

There are some near-term middleware challenges to achieving interoperability at the device, networking, and data exchange levels. These issues can be addressed based on our experiences with similar challenges in the past. For example, Samsung and Google’s collaboration on a low-power wireless network called Thread uses Bluetooth Smart to connect one device to another. Samsung, Dell, and Intel’s effort on the Open Interconnect Consortium is working to connect any device with any other, regardless of the operating system, connection provider, or form factor.

However, the challenge of interoperating and integrating the data and information is the more important and more demanding task. To that end, one effort called Semantic Gateway as Service (SGS) [1] allows for translation among a variety of IoT messaging protocols in current use, such as XMPP, CoAP, and MQTT. Another important interoperability capability is provided by the W3C's Semantic Sensor Network (SSN) ontology and annotation framework [2]. SSN can describe any sensor/device and its data in a standard form and supports semantic annotation of sensor data, making that data more meaningful. In essence, this provides semantic interoperability between messages carrying IoT data. The W3C has paired up with the Open Geospatial Consortium to create an international standard with SSN as the primary input.
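
To make semantic annotation concrete, here is a minimal sketch in Python, using rdflib and the SOSA vocabulary at the core of the W3C/OGC standard that grew out of SSN. The sensor and property names are illustrative, not taken from SGS or SSN itself:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    SOSA = Namespace("http://www.w3.org/ns/sosa/")  # core of the W3C/OGC SSN standard
    EX = Namespace("http://example.org/")           # illustrative namespace

    g = Graph()
    g.bind("sosa", SOSA)

    obs = EX["observation/1"]
    g.add((obs, RDF.type, SOSA.Observation))
    g.add((obs, SOSA.madeBySensor, EX["thermometer/42"]))      # which device produced it
    g.add((obs, SOSA.observedProperty, EX["airTemperature"]))  # what was measured
    g.add((obs, SOSA.hasSimpleResult, Literal(21.5, datatype=XSD.double)))
    g.add((obs, SOSA.resultTime, Literal("2015-05-01T12:00:00Z", datatype=XSD.dateTime)))

    print(g.serialize(format="turtle"))

Because the reading is described with a shared vocabulary rather than an ad hoc payload layout, a message that a gateway such as SGS translates from CoAP to MQTT carries its meaning with it.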

An even bigger challenge will face the recipients of all this data, both humans and machines (including software agents). How will all this data find its way to those who can consume and benefit from it in a timely manner? How can we prevent massive data and information overload?

Today, everyone is looking for everything to be smart. We have all heard the terms smart watch, smart home, smart building, smart car, smart city, smart grid, and smart nation. IoT technology will play a crucial role in all of these. After all, as Tim O’Reilly notes, IoT is more about human augmentation [3], or about Computing for Human Experience [4], a term I had used earlier.

To IoT data I would add all the data, collective intelligence (as in Wikipedia), and knowledge we find on the Web, as well as relevant explicit or implicit social interactions, including those enabled by social media. Collectively, what we have is physical, cyber, and social data (http://wiki.knoesis.org/index.php/PCS), all of which play a role in helping humans gain better insights and actionable intelligence.

Humans are ill-equipped to deal with the massive amounts of data coming their way. What we need is highly contextualized and personalized information that is also actionable. I call this Smart Data (http://wiki.knoesis.org/index.php/Smart_Data), a term initially proposed in 2004 that increasingly makes sense in conveying how the volume, variety, velocity, and veracity challenges of physical, cyber, and social Big Data need to be managed to derive value from it.

By marrying Smart Data with IoT, we will get Smart IoT. Such Smart IoTs would then take on the role of a human agent, or become a human extension and human complement. Consider the human brain’s ability to simultaneously and in real time consume data of different modalities, such as text, images, speech, and video, and then process it using one’s knowledge, experiences, and preferences to achieve what we call human cognition and perception.

As our ability to create Smart Data advances, we will similarly see machines become better able to intelligently filter just the data that is needed to meet their human masters’ needs, assimilate all forms of contextually relevant data, personalize it by factoring in a user’s preferences and needs, and present the results at a level of abstraction that is ready for a human to act upon.
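
As a toy illustration of that filter-assimilate-personalize-abstract pipeline, consider the following Python sketch. All of the field names, topics, and thresholds are mine, invented purely for illustration:

    def smart_data(observations, user):
        # Filter: keep only observations on topics this user cares about.
        relevant = [o for o in observations if o["topic"] in user["interests"]]

        # Assimilate: group the relevant readings by topic.
        by_topic = {}
        for o in relevant:
            by_topic.setdefault(o["topic"], []).append(o["value"])

        # Personalize and abstract: compare against the user's own thresholds
        # and emit an actionable message rather than raw numbers.
        alerts = []
        for topic, values in by_topic.items():
            level = sum(values) / len(values)
            if level > user["thresholds"][topic]:
                alerts.append(f"{topic} is above your comfort level ({level:.1f})")
        return alerts

    user = {"interests": {"pollen"}, "thresholds": {"pollen": 7.0}}
    obs = [{"topic": "pollen", "value": 8.2}, {"topic": "traffic", "value": 3.0}]
    print(smart_data(obs, user))  # ['pollen is above your comfort level (8.2)']

The point is not the arithmetic but the shape of the computation: raw observations go in, and a small amount of contextualized, personalized, actionable information comes out.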

Initially, we will see more intelligence in the computing environment that processes IoT data, but eventually we will see IoTs themselves becoming smart or intelligent, complementing some of the exciting advances we now see in robotics. So here is the takeaway: IoT and AI, especially semantic, cognitive, and perceptual computing, will come together to create Smart IoT that will act as a human agent, human extension, and human complement.

[1] P. Desai, A. Sheth, and P. Anantharam, "Semantic Gateway as a Service Architecture for IoT Interoperability," arXiv, Oct. 18, 2014; http://xxx.tau.ac.il/abs/1410.4977.

[2] M. Compton et al., "The SSN Ontology of the W3C Semantic Sensor Network Incubator Group," Journal of Web Semantics, vol. 17, Dec. 2012.

[3] T. O’Reilly, "#IoTH: The Internet of Things and Humans," O’Reilly Radar, Apr. 16, 2014.

[4] A. Sheth, "Computing for Human Experience: Semantics-Empowered Sensors, Services, and Social Computing on the Ubiquitous Web," IEEE Internet Computing, vol. 14, no. 1, Jan./Feb. 2010; doi:10.1109/MIC.2010.4.

Citation: Ram D. Sriram and Amit Sheth, "Internet of Things Perspectives," IT Professional, vol. 17, no. 3, pp. 60-63, May-June 2015; doi:10.1109/MITP.2015.43.

  1. Note: This content will appear as a guest Career Advice Column in ACM XRDS magazine.

    I have had significant involvement in advising and mentoring graduate students in Computer Science, especially Ph.D. students (28 graduated so far) and M.S.-thesis students (about 30 graduated so far). The outcomes for these graduates have been exceptional: all my Ph.D. graduates have successfully competed with their counterparts from top-20 schools for the academic, research lab, or other high-tech positions they have obtained. I have reviewed in a presentation what I believe is the ecosystem that has helped produce this outcome.

    This article summarizes the discussions I have with my students to help them understand the path they need to take to succeed in the face of very tough competition, especially for the most exciting and in-demand jobs. The discussions center on the question: What is it that you have that others don’t? I recall this question from the 2012 Academy Award-winning movie ‘The Artist’. In that movie, a young actress goes to a seasoned actor for advice on how to win one of the few good roles as she competes with all the other seemingly equally capable young actresses. The advice she gets is to answer the above question for herself. Below I rephrase the considerations I offer to my students. In doing so, I also borrow from my answer on Quora to a question about the life of a Ph.D. student at a top school. In that answer, I indicate that ranking is highly overrated, and that it is often the advisor who plays a far more significant role than the department or the university in the success of a research student. I also point to Malcolm Gladwell’s talk at Google, where he gives a convincing account of how top students at mediocre schools consistently have better outcomes than middle-of-the-pack students at top schools. In my view, while traditional research outcomes, as measured by publications in good venues (a minimum of three in the top venues of your field), are necessary, they are far from sufficient. More successful students’ preparations nowadays include:
    • Exceptional teamwork: It is important to learn to appreciate collaborative work; to learn from (be mentored by) senior students or postdocs in the group (in addition to the faculty supervisor, of course); and later to mentor junior students (junior Ph.D. or M.S.-thesis students) as part of a collaboration. Learn to appreciate the different skills others in the group bring. 
    • Good development skills: For an increasing number of industry jobs, a candidate will be subjected to two to four rounds of engineering interviews, and most will have to refresh their fundamentals! I recommend that students build tools and occasionally open source them on GitHub; participate in competitions (e.g., Kaggle) and hackathons; and participate in, or even lead, an international standardization effort or significant dissemination activities. 
    • Always ask why: When framing your research or innovation, what and how are not enough. In fact, without a good answer to “why,” it is unlikely you will succeed in your research or just about any other activity. 
    • Interdisciplinary work is more important than ever before: Computer science is evolving into a service discipline. Increasingly, for any major or impactful problem in another discipline, computer science skills and tools are indispensable. However, it is hard to collaborate with experts in other fields at arm's length. It is necessary to cultivate a deep appreciation of the field in which you are solving the problem. 
    • Refine your soft skills and be social: Soft skills are as important as hard (technical) skills. These include the ability to communicate very well (i.e., have a command of both written and spoken language), to network (make contacts; e.g., connect on LinkedIn with anyone you meet at a conference), and to have a good overview of areas outside your expertise (i.e., keep up to date on major advances in computer science and beyond, and not just be aware of publications in one or two conferences). I maintain a library of books not just in computer science but also in other fields of broader interest (e.g., neuroscience, cognitive science, behavioral economics) and expect my students to read as many of them as possible. Attend or participate in advisor/lab group meetings, meet visitors and the advisor's/lab's guests, and occasionally join for lunch or dinner when invited by the advisor. 
    • Learn how to obtain resources for what you want to do: If your advisor allows, help the advisor in writing one or two proposals. Understand how to compete for the resources you will need to conduct your research or innovative work in the future, and develop the associated soft marketing skills. If you are planning to join academia, be sure to understand the various funding agencies (e.g., in the USA: NIH, NSF, etc.). 
    • Learn to review papers so you know how others will evaluate your technical work: Serve on program committees (often for top conferences; this is more possible if your advisor gets a lot of invitations to serve on PCs) and review conference/journal papers (both from your own and your advisor's PC participation). 
    • Lead or participate in project team meetings and do the myriad activities involved in carrying out a project (e.g., meet with collaborators, who may be in another discipline, as an increasing number of projects are interdisciplinary, and some involve other institutions). 
    • Prepare for and later go on internships (each of my Ph.D. students does three or four internships during a typical five- to six-year Ph.D.): This is your chance to learn from other mentors and get a sense of what it is like to compete in the real world. When doing internships at top companies, you will get to observe what it takes to work there and succeed. 
    • Try to present a tutorial or co-organize a workshop at a major conference in your area (a majority of my students do): This will require that you stay on top of all the major happenings in your field. It also provides excellent networking opportunities. Alternatively, write a survey paper (perhaps with your advisor and colleagues). While this is a very demanding endeavor, good surveys tend to garner high visibility and many citations. 

    I hope the above observations get you to think more and plan your journey in computer science education and training. I recommend using Quora if you want to benefit from others’ experiences; I have addressed many questions from computer science students through my Quora answers.

  2. Jennifer Zaino of Dataversity polls some members of the community and writes a look-back/look-ahead article at the end of each year on the topic of the semantic web, linked data, and smart data. Here are her questions and my answers this year. Parts of this are captured in her article: "2017 Trends for Semantic Web and Semantic Technologies," Dataversity, Nov. 29, 2016.

    What was the most significant event/development/news in semantic web/linked data/smart data in 2016?

    In my view, the basic trend in the attention the Semantic Web gets has not changed. If anything, the term “Semantic Web” or “Semantic Web technology” is receiving even less attention (see the statistics on the popularity of “Semantic Web” over the recent past). The reason is this: the main use of the Semantic Web, in my view, is to improve the interoperability/integration, understanding, and exploitation of data and content. And the main enabler of this is the use of ontologies, especially populated ontologies, in other words, broad-based and domain-specific knowledge. What has been happening is that AI, which is a far bigger field with many more followers and practitioners, has recognized that background knowledge (again, both broad-based and domain-specific) is increasingly key to further improving machine learning and NLP. In other words, AI, with its much larger footprint in research and practice, has realized that knowledge will propel machine understanding of (diverse) content. In this context, while the Semantic Web standards are doing no better or worse than in recent years, the core value proposition of the Semantic Web is being co-opted, or swallowed, by the bigger area of AI.

    As for Linked Data, there is limited new progress. Just putting more data into the linked data cloud does not add more value: indifferent quality, limited interlinking, and the limited expressiveness of mappings between related data hinder broader adoption, while the few datasets that are extracted from actively maintained repositories (e.g., DBpedia from Wikipedia) or are highly curated continue to have the lion’s share of applications. One laudable exception is Cognonto, which is in the process of integrating six large public knowledge bases to benefit machine learning applications.

    The first recorded use of the term Smart Data was likely in 2004, in the context of semantic and knowledge-enabled processing of diverse data. Following the popularity of Big Data, I reused it in terms of deriving value by harnessing volume, velocity, variety, and veracity. From the industry perspective, “smart data” has rapidly gained in usage, but it is now being used to mean many different things, which perhaps dilutes its importance. This is analogous to the term “big data,” which has taken on a very diffuse meaning and usage.

    Which of your expectations for the semantic web/linked data/smart data in 2016 were not fulfilled?

    The outcomes were pretty much in line with the views I had shared in previous years: slow progress in broader industry adoption of Semantic Web standards, a continued struggle with the technical challenges hindering linked data usage, and faster adoption of semantic techniques (not necessarily using Semantic Web standards), especially involving the building and use of knowledge graphs. One key challenge that continues to hinder more rapid adoption of the semantic web and linked data is the lack of robust yet very easy-to-use tools for dealing with large and diverse data, tools that can do for this space what Weka did for machine learning.

    What are your top three expectations for semantic web/linked data/smart data events/milestones/developments in 2017, and why?

    Given the tremendous success of machine learning and “bottom-up” data processing (emphasizing learning from data), I expect to see increasing emphasis on developing knowledge graphs and using them for “top-down” processing (emphasizing the use of models or background knowledge), or for “middle-out” processing in conjunction with “bottom-up” processing. While everyone is using DBpedia and a few other high-quality, broad-based knowledge bases (or, in domain-specific applications such as health, well-curated knowledge bases like UMLS), I see more and more companies investing in developing their own knowledge graphs as intellectual property. A good example is the Google Knowledge Graph, which has grown from a fairly modest size based on Freebase to one that is much larger. However, this has required significant human involvement, and not many companies have been able to put processes in place and develop tools to reduce the human involvement in knowledge graph development and maintenance. 

    I expect we will make progress in this direction, for example, by extracting the right subset of a bigger knowledge graph for a particular purpose, as sketched below. Still, progress will come at a moderate pace (as discussed in the past, one reason is the lack of personnel skilled in Semantic Web and knowledge-enhanced computing topics). A broad variety of tools and applications, including search, chatbots, and knowledge discovery, are waiting to exploit such purpose-built knowledge graphs by using them in conjunction with machine learning and NLP.
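
    As a sketch of what extracting such a purpose-built subset might look like, here is a simple breadth-first walk in Python from seed entities that keeps only the relation types relevant to the task. The tiny graph and relation names are invented for illustration:

        from collections import deque

        # Toy knowledge graph as (subject, relation, object) triples; invented data.
        triples = [
            ("aspirin", "treats", "headache"),
            ("aspirin", "interactsWith", "warfarin"),
            ("aspirin", "discoveredIn", "1897"),
            ("warfarin", "treats", "thrombosis"),
        ]

        def extract_subgraph(seeds, keep_relations, depth):
            """Keep triples reachable from the seeds within `depth` hops,
            restricted to the relation types relevant to the purpose at hand."""
            index = {}
            for s, r, o in triples:
                index.setdefault(s, []).append((r, o))
            kept, visited = [], set(seeds)
            frontier = deque((s, 0) for s in seeds)
            while frontier:
                node, d = frontier.popleft()
                if d == depth:
                    continue
                for r, o in index.get(node, []):
                    if r in keep_relations:
                        kept.append((node, r, o))
                        if o not in visited:
                            visited.add(o)
                            frontier.append((o, d + 1))
            return kept

        # A drug-interaction application keeps clinical relations and drops trivia.
        print(extract_subgraph({"aspirin"}, {"treats", "interactsWith"}, depth=2))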

    The second expectation is that there will be deeper and broader information extraction from a wider variety of textual as well as multimodal content, exploiting semantics, and especially knowledge-enhanced machine learning and NLP. First, the data needed to serve advanced applications will increasingly come from complementary sources and in different modalities (e.g., a personalized semantic mHealth approach to asthma management). Second, in addition to extracting entities, relationships of limited types (i.e., types known a priori, for which learning solutions can be developed), and sentiment and emotion, we will develop deeper understanding through more types of subjectivity and semantic or domain-specific extraction. As examples of the latter, for clinical text, we will identify more phenotype-specific relationships, the intent behind a clinician’s or consumer’s search for health content, the severity of a disease, and so on.

    Finally, while the semantic web research community has given a lot of attention to OWL and its variants, what we need is an enhancement of the representation at the semantic data level: can everything be represented as triples? We need better ways to represent and compute with provenance, complex representations (e.g., nested statements), and context, as real-world applications require. One bright spot is the work on the singleton property; I expect further progress in the near term, followed by broader adoption.
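
    To see why bare triples fall short, and what the singleton property proposal does about it, here is a small Python/rdflib sketch. The entities and property names are invented; the singletonPropertyOf URI follows the proposal in the singleton property paper:

        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/")
        # The singleton property work proposes extending the RDF namespace with this term.
        SINGLETON_PROPERTY_OF = URIRef(
            "http://www.w3.org/1999/02/22-rdf-syntax-ns#singletonPropertyOf")

        g = Graph()

        # Instead of asserting (:Marie :spouseOf :Pierre) directly, assert it via a
        # unique property instance, so the statement itself becomes describable.
        sp = EX["spouseOf#1"]
        g.add((EX.Marie, sp, EX.Pierre))
        g.add((sp, SINGLETON_PROPERTY_OF, EX.spouseOf))

        # Provenance and temporal context attach to the singleton property,
        # which a bare triple has no place to carry.
        g.add((sp, EX.source, EX.someArchive))
        g.add((sp, EX.validFrom, Literal("1895")))

        print(g.serialize(format="turtle"))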


    What would surprise you most to see occur in the world of semantic web/linked data/smart data events/milestones/developments in 2017 (e.g., adoption by new communities of users)? Why, and what impact would that have?


    For brevity, let me just share a recent dialogue that essentially covers my expectations for developments in semantic, cognitive, and perceptual computing.
  3. In a recently published dialogue/interview, I had a chance to share my thoughts on a wide range of topics, including:

    • semantics, semantic computing, and the semantic web: key issues and progress
    • teaching/education philosophy and commercialization
    • cognition, intuition, machine learning, and man-machine symbiosis


    Find it here: 


  4. In this note, I give a preview of our upcoming article on a very exciting topic:
    Amit Sheth, Pramod Anantharam, and Cory Henson, "Semantic, Cognitive, and Perceptual Computing: Advances toward Computing for Human Experience," to appear in IEEE Computer. Preprint: http://arxiv.org/abs/1510.05963


    While the debate about whether AI, robots, or machines will replace humans rages on (among Gates, Hawking, Musk, Thiel, and others), there is a long tradition of viewpoints that take a progressively more human-centric view of computing. A range of viewpoints extending from machine-centric to human-centric computing has been put forward by McCarthy (intelligent machines), Weiser (ubiquitous computing), Engelbart (augmenting human intellect), Licklider (man-machine symbiosis), and others, as shown in Figure 1. Our focus in this article is on the recent progress taking place in the tradition of human-centric computing, exemplified by what I have termed Computing for Human Experience (CHE). CHE focuses on serving our needs, empowering us while keeping us in the loop, making us more productive with better and timelier decision-making, and improving and enriching our quality of life. Experiential computing proposes to utilize the symbiotic relationship between computers and people, exploiting their relative strengths of symbolic/logic manipulation and complex pattern recognition, respectively.

    Figure 1. A wide gamut of computing, extending from machine-centric to human-centric.

    CHE utilizes the ever-morphing and transforming Web to connect people and devices, and to deliver and share massive amounts of multimodal and multisensory observations that capture the moments of people’s lives, including various situations pertinent to people’s needs and interests along with some of their idiosyncrasies. Data of relevance to people’s lives spans the physical, cyber, and social spheres. The physical sphere encompasses reality, as measured by sensors/devices/the Internet of Things; the cyber sphere encompasses all shared data and knowledge on the Web; and the social sphere encompasses all human interactions and conversations. Observation data on the Web may represent events of interest to a whole population (e.g., climate), to a sub-population (e.g., traffic), or to an individual (something very personal, like an asthma attack). These observations contribute toward shaping the human experience, which is defined as the materialization of feelings, beliefs, facts, and ideas that can be acted upon.


    CHE emphasizes a contextual and personalized interpretation of data, which is more readily consumable and actionable for people. Toward this goal, we discuss the computing paradigms of semantic computing and cognitive computing, and an emerging paradigm in this lineage that we term perceptual computing. We believe these technologies offer a continuum that reaches toward the goal of making the most of the vast, growing, and diverse data about things that matter to people’s needs, interests, and experiences. This is achieved through actionable information, both when humans desire something (explicit action) and through ambient understanding (implicit action) of when something may be useful to people’s activities and decision-making. Perceptual computing, in particular, is characterized by its use of interpretation and exploration to actively interact with the surrounding environment in order to collect data of relevance and usefulness for understanding the world around us.




    Semantics, perception, and cognition interact seamlessly. Semantics makes an observation or data meaningful (i.e., provides a definition within the context of a system or of people’s knowledge), which in turn allows processing that relates it to, and integrates it with, other observations and data. While the outcome of cognition is an understanding of our environment, the act of perception applies that understanding to exploring our environment. Cognition enables perception to follow the most promising exploration path by providing a comprehensive understanding through the incorporation of background knowledge.


    The full article focuses on defining and characterizing the computing support for semantics, cognition, and perception. It discusses semantic computing, cognitive computing, and perceptual computing to draw distinctions while acknowledging their complementary capabilities in supporting CHE. We then provide a conceptual overview of the newest of these three paradigms: perceptual computing.

    Here is an excerpt:



    Semantic Computing: Semantic computing encompasses technology for representing concepts and their relations in an integrated semantic network that loosely mimics the interrelation of concepts in the human mind. This conceptual knowledge, represented formally in an ontology, can be used to annotate data and to infer new knowledge from interpreted data (e.g., to infer expectations once concepts are recognized). Additionally, semantic computing plays a crucial role in dealing with multisensory and multimodal observations, enabling the integration of observations from diverse sources. Figure 2 shows semantic computing as a vertical box through which both interpretation and exploration are routed. Semantic computing also provides languages for the formal representation of background knowledge.
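
    Here is a minimal sketch of the "infer new knowledge" step, using Python's rdflib together with the owlrl reasoner to materialize an RDFS closure. The asthma-flavored class names are illustrative, not from the article:

        import owlrl
        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF, RDFS

        EX = Namespace("http://example.org/")
        g = Graph()

        # Tiny ontology fragment: an asthma attack is a kind of respiratory event.
        g.add((EX.AsthmaAttack, RDFS.subClassOf, EX.RespiratoryEvent))
        # An annotated observation interpreted as an asthma attack.
        g.add((EX.event42, RDF.type, EX.AsthmaAttack))

        # Materialize everything RDFS semantics entails.
        owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

        # The more general fact was never asserted, only inferred.
        print((EX.event42, RDF.type, EX.RespiratoryEvent) in g)  # True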

    Cognitive Computing: In 2002, DARPA defined cognitive computing as “reason[ing], [the] use [of] represented knowledge, learn[ing] from experience, accumulat[ing] knowledge, explain[ing] itself, accept[ing] direction, be[ing] aware of its own behavior and capabilities as well as respond[ing] in a robust manner to surprises.” Cognitive algorithms interpret data by learning and matching patterns in a way that loosely mimics the process of cognition in the human mind. Cognitive systems learn from their experiences and then get better at performing repeated tasks. Through data mining, pattern recognition, and natural language processing, cognitive computing is rapidly progressing toward technology that supports our ability to answer complex questions. Cognitive computing acts as a prosthetic for human cognition by analyzing massive amounts of data and answering questions humans may have when making certain decisions.



    Perceptual Computing: The human senses can receive 11 million bits per second and send this information to the brain for processing. Human perception constructs high-level abstractions such that the conscious mind seems to process only about 50 bits per second. Perceptual computing needs to play a similar role in creating meaningful abstractions from massive amounts of data.

    Perceptual computing will support our ability to ask contextually relevant and personalized questions. It complements semantic computing and cognitive computing by providing the machinery to ask the next question or derive a hypothesis from observations, and to help identify which additional facts and observations can evaluate or refine that hypothesis, in turn aiding decision-makers in gaining actionable insights.
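
    A toy sketch of that hypothesize-and-probe cycle in Python, in the spirit of the explanation and discrimination operations of perceptual computing; the conditions and observable properties are invented for illustration:

        # Toy background knowledge: condition -> observable properties; invented data.
        knowledge = {
            "asthma_exacerbation": {"cough", "wheezing", "high_pollen"},
            "common_cold":         {"cough", "sore_throat"},
        }

        def explain(observed):
            """Hypotheses: conditions consistent with everything observed so far."""
            return [c for c, props in knowledge.items() if observed <= props]

        def discriminate(hypotheses, observed):
            """Next question: the unobserved property that best splits the hypotheses."""
            pool = set().union(*(knowledge[c] for c in hypotheses)) - observed
            half = len(hypotheses) / 2
            return min(pool,
                       key=lambda p: abs(sum(p in knowledge[c] for c in hypotheses) - half),
                       default=None)

        observed = {"cough"}
        hypotheses = explain(observed)             # both conditions explain a cough
        print(discriminate(hypotheses, observed))  # e.g., 'wheezing' or 'sore_throat'

    Each answer shrinks the hypothesis set, and the refined hypotheses in turn determine what to observe next; this is the cyclical interplay between background knowledge and observation shown in Figure 2.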


    Figure 2. Conceptual distinctions between perceptual, cognitive, and semantic computing, along with a demonstration of the cyclical process of perceptual computing, which utilizes and refines background knowledge to provide contextualization and personalization.

    In the article, we use the personalized digital health example of asthma management to explain the computational contributions of semantic computing, cognitive computing, and perceptual computing over physical-cyber-social data in synthesizing actionable information. This is done through computational support for the contextual and personalized processing of data into abstractions that move it closer to the level of human comprehension and decision-making.

    As a parting thought, I am reminded of the term “golden braid” from Douglas Hofstadter’s wonderful book, Gödel, Escher, Bach: An Eternal Golden Braid. I believe that for the next couple of decades, semantic, cognitive, and perceptual computing will each mature in their own right and will work synergistically, in an intertwined manner, to enable intelligent machines to improve the human experience.

       