Friday, March 6, 2015

Internet of Things and Data Dilemma

The coming billions of Internet of Things devices will simply generate too much data to be analyzed in traditional ways. Instead of the usual predefined one-to-one legacy IP topology, only a publish/subscribe model allows big data servers to be selective and adaptive in choosing which data to operate on, and thus to grow smarter over time.


Even more importantly, the big data analyzers will not even know what data streams would be useful until they discover the data. Information neighborhoods created through data stream affinities will present opportunities for selecting and combining small data flows from many different kinds of end devices, not all of which are even part of a specific application.
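A minimal sketch of that discovery step, under the assumption that the analytics side can simply watch whatever topics flow past it: the catalogue below records streams as they appear, so they can be selected later by criteria the analyst did not have to know in advance. All topic names are invented for the example.

from collections import defaultdict


class StreamCatalogue:
    """Tracks streams an analytics service has discovered on the wire."""

    def __init__(self):
        # topic -> count of messages observed so far
        self.seen = defaultdict(int)

    def observe(self, topic, payload):
        # Record that a stream exists; deciding what to do with it comes later.
        self.seen[topic] += 1

    def streams_matching(self, keyword):
        # Pick out discovered streams the analyst now cares about.
        return [topic for topic in self.seen if keyword in topic]


catalogue = StreamCatalogue()
# Messages from devices installed for entirely unrelated purposes still appear.
for topic, value in [
    ("parking-lot-3/gate/occupancy", 0.8),
    ("building-7/floor-3/temperature", 21.5),
    ("streetlight-12/power/current", 0.4),
]:
    catalogue.observe(topic, value)

print(catalogue.streams_matching("temperature"))  # found by discovery, not preconfigured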


This allows IoT applications to become smarter and smarter over time, as ever more end devices are installed. Whatever initial purpose these end devices serve, they may also unexpectedly and unpredictably benefit other applications that discover their data outputs and find them useful. When first installed, specific appliances, sensors, and actuators may serve a particular application, but over time new end devices may be deployed by the same or other organizations. Data streams from these new devices may then be recognized, by “affinities” of place, time, or correlation, and incorporated into the original application’s information “neighborhood.”
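A hedged sketch of how such affinities might be computed, covering place and time only (correlation between values would need more machinery): streams are grouped into a “neighborhood” when their topics share a location prefix and their latest readings fall in the same time window. The prefix convention and the ten-minute window are assumptions made for this illustration.

from collections import defaultdict


def neighborhoods(streams, window_seconds=600):
    # Group streams that share a location prefix (affinity of place) and whose
    # latest readings fall into the same time bucket (affinity of time).
    groups = defaultdict(list)
    for topic, last_seen in streams:
        place = topic.split("/")[0]
        time_bucket = int(last_seen) // window_seconds
        groups[(place, time_bucket)].append(topic)
    return groups


streams = [
    ("building-7/floor-3/temperature", 1425600000),  # original application's sensor
    ("building-7/lobby/occupancy", 1425600120),      # newer device, same place and time
    ("parking-lot-3/gate/occupancy", 1425600300),
]
for key, topics in neighborhoods(streams).items():
    print(key, topics)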
