We are still living in a semi-automatic world. Even when promised seamless operation, there’s usually an app between us and the goal. But we are definitely edging just a bit closer to the true promise of the minimally defined and maximally marketed “Internet of Things,” in which connected objects are taught to interact properly, and we can skip the app that prompts them to exchange information for our benefit.
Cloud-based APIs can inform and influence complex behaviors in installed systems. Pictured is a flagship retail environment in which AV&C developed custom generative and playback software, controlled by a rules-based algorithm and content management system, to create infinite content variation for the showpiece canvasses.
An incredibly comprehensive educational presentation program will be on offer at the IoT Pavilion, and I’m not just saying that because I happened to help organize it. Though I am proud to say we’ve assembled quite the roster of industry luminaries and dazzling minds from outside of our technical sphere to help prove that AV is more than ready for this “things” thing. (With the caveat that we figure out the security and encryption bit, of course.)
Across all three days of the show, June 8-10, the IoT Pavilion (Booth N2871) will present a carefully arranged program of sessions that look at this progress from every angle. The presentations are 20 minutes each, designed to get to the point and provide insight into the practical opportunities and techniques in and around IoT.
We’re looking at interactivity, interfaces, biometrics, analytics, wearable technology, and the overall digitization of the built environment. But we’re not looking at these factors merely from the “Wow, those are cool things” perspective. We have presenters who are using all of the above in top-level applications right here in our sector of the technology universe.
Sure, this industry was built on sequential macros, but we’ve always known how wonderful it would be if a true communication of status and information exchange occurred between pieces of hardware. We’d have sprinklers that know it’s raining and doors that unlock as we approach. Technological action would be dictated by data interpretation. We’d have true “if/then” scenarios (and maybe someday “I think, therefore I am” scenarios).
So in the “internet of everything” version of systems integration, all we have to do is make the leap to APIs, right? Yeah, that does seem to give a lot of people pause. But APIs are simply another way for AV people to provide a translation service for things that can’t natively communicate with each other, even when they’re connected via Ethernet, Wi-Fi, Bluetooth, NFC, etc.
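That translation service can be surprisingly small. Here is a minimal sketch of the idea using the article’s own sprinkler example: a rain sensor and a sprinkler controller that share no native protocol, bridged by a few lines of integrator-written glue code. All class and method names here are hypothetical stand-ins, not any vendor’s actual API.

```python
class RainSensor:
    """Stands in for a sensor reached over, say, Zigbee or BLE."""
    def __init__(self, raining: bool):
        self._raining = raining

    def read(self) -> dict:
        # Many real sensors report vendor-specific payloads like this.
        return {"precip": 1 if self._raining else 0}


class SprinklerController:
    """Stands in for a controller reached over Ethernet/HTTP."""
    def __init__(self):
        self.zone_active = True

    def set_zone(self, active: bool):
        self.zone_active = active


def bridge(sensor: RainSensor, sprinkler: SprinklerController):
    """The 'API translation': interpret one device's data and drive
    another device's behavior -- a true if/then."""
    if sensor.read()["precip"]:
        sprinkler.set_zone(False)   # it's raining: don't water
    else:
        sprinkler.set_zone(True)


sprinkler = SprinklerController()
bridge(RainSensor(raining=True), sprinkler)
print(sprinkler.zone_active)  # rain detected, so the zone is off
```

The point is not the ten lines of logic but where they live: neither device had to be redesigned, because the adapter sits between two APIs the devices already expose.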
Demystifying the API and redefining it as an extension of the system design sensibilities already possessed by AV integrators is one objective of the IoT Pavilion presentation to be given by David Bianciardi of AV&C. Building from the core AV principle of programming connected devices to take predetermined steps for a set outcome, Bianciardi will demonstrate how APIs broaden the possibilities for systems integration.
“With APIs, you can design aggregate behaviors that combine local sensing and logic with high-power, cloud-based services,” he said. “Increasingly, we’ll allow devices to talk to each other, and figure out the best way to achieve the user’s desired outcome. Defining system behavior like, ‘I want lights on when it’s dark outside, shades drawn when sun is on south windows, etc.’ will eventually not even be explicit logic, but a group of devices learning how to combine forces and reach out to data sources to most effectively serve the needs of a user. Like we see on the web, APIs at first are a way of talking between devices and platforms, but that connectivity will lead to us letting machine learning take on some of the decision making in the system—allowing integrators to deliver more complex systems that operate simply and are resilient in the face of new inputs or barriers.”
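Before the machine-learning stage Bianciardi describes, that behavior starts as explicit rules. A sketch of his lights-and-shades example, written as small rule functions evaluated against a snapshot of local sensor readings and cloud-sourced data; the sensor names, thresholds, and rule structure are illustrative assumptions, not a real platform’s schema.

```python
def lights_rule(state: dict) -> dict:
    # "Lights on when it's dark outside"
    return {"lights": "on" if state["exterior_lux"] < 100 else "off"}


def shades_rule(state: dict) -> dict:
    # "Shades drawn when sun is on south windows" -- the solar azimuth
    # could come from a cloud ephemeris API rather than a local sensor.
    sun_on_south = 90 <= state["solar_azimuth_deg"] <= 270
    return {"south_shades": "down" if sun_on_south else "up"}


def evaluate(state: dict, rules) -> dict:
    """Aggregate every rule's output into one target system state."""
    target = {}
    for rule in rules:
        target.update(rule(state))
    return target


snapshot = {"exterior_lux": 40, "solar_azimuth_deg": 180}
print(evaluate(snapshot, [lights_rule, shades_rule]))
# {'lights': 'on', 'south_shades': 'down'}
```

Because each rule is an independent function over shared state, adding a new behavior means adding a rule, not rewiring the system — which is the “aggregate behaviors” quality the quote describes.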
To further illustrate the potential for automated digital interaction with the built environment and the people within it, Bianciardi will present another session with Neil Redding, an interdisciplinary technologist who most recently headed up creative digital projects for Gensler. The duo will present a preview of how the “digitally personalized physical world” will take shape in retail and other brand-centric, consumer-oriented spaces in their “Future Scenarios” presentation on the second day of the show.
Redding’s take on the merging of digital and physical worlds is that it’s already happened—not quite in the augmented reality (AR) sense just yet, but in the expectation of always-on, always-interactive, constantly updated experiences prompted by mobile devices and health-tracker bracelets. “The digital experience has become contextual; it’s delivered to us based on where we are and what we’re doing,” Redding said. “We’re increasingly using computers to use the world, ordering Amazon, Starbucks, Uber, food delivery, and all of the personalization of these services is becoming automatic, taken on more and more by software.”
From this standpoint, IoT isn’t a prospect, it’s a reality in today’s post-digital world. What we used to label “digital” is now so pervasive that it’s just normal, and the design of physical spaces is increasingly dependent on transparent technological integration. “We already expect the digital experience to be accessible everywhere, and we make choices on where we spend our time based on where we have access to great Wi-Fi or signal,” Redding elaborated. “We expect digital to connect with physical and vice versa.”
This is where the “experience design” we’ve heard so much about comes in. Except in the IoT phase, it’s not just about dazzling people with great audio and video, it’s about personal device integration. Environments need to recognize phones or wearable devices, automatically connecting them with a space. This paves the way for the imminent arrival of digital capabilities being layered over the physical by the AR devices we’ll soon be employing on a regular basis.
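What “an environment recognizing a device” means in practice can be sketched in a few lines: a space keeps a registry of known devices and adapts its state when one is detected. The device IDs and preference fields below are invented for illustration; a real deployment would key on something like BLE advertisements or Wi-Fi association events.

```python
# Hypothetical registry of personal devices the space has been told about.
KNOWN_DEVICES = {
    "aa:bb:cc:dd:ee:01": {"name": "visitor-1", "preferred_volume": 30},
    "aa:bb:cc:dd:ee:02": {"name": "visitor-2", "preferred_volume": 55},
}


class Room:
    def __init__(self):
        self.volume = 40          # default program level
        self.greeting = None

    def on_device_detected(self, device_id: str):
        """Called when the space 'sees' a phone or wearable."""
        profile = KNOWN_DEVICES.get(device_id)
        if profile is None:
            return  # unknown device: leave the room state alone
        self.volume = profile["preferred_volume"]
        self.greeting = f"Welcome, {profile['name']}"


room = Room()
room.on_device_detected("aa:bb:cc:dd:ee:02")
print(room.volume, room.greeting)  # 55 Welcome, visitor-2
```

The interesting design questions sit outside this sketch — consent, privacy, and the security and encryption caveat raised earlier — but the control logic itself is familiar AV territory.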
But it doesn’t have to be scary. Especially if we learn now how to more cohesively design physical spaces and their corresponding digital experiences, so the effect is, ideally, more pleasant than burying our attention in our phones all the time.
How to design for this new experience is another key element of sessions at the InfoComm IoT Pavilion, and the schedule includes viewpoints from numerous vertical markets and applications. Of course, tying all of these technologies together on the back end is one thing, but clearly, the interface is all, and fortunately there will be a session on just that subject, presented by Paul Chavez of Harman Professional Solutions. Leaving apps behind and looking ahead to voice recognition and gesture control, Chavez will present a number of use cases and examine how to match the human element with the right interface for the expanding number of internet-enabled things in our lives.
“How do you know what interface is the best interface—it depends on the scenario,” he observed, noting that our overreliance on mobile devices and apps may be convenient, but it puts something in between us and what we need to accomplish. Here he cites the work of UX design superstar Golden Krishna: “He’s delved into this. When you get home from work, you don’t want to bring out your phone and open the door, you just want the door to open.” And with the suite of wireless protocols that the IoT leverages, we truly can develop an intelligent wireless interface, he added, “one that knows where you are, and knows what you want to do in a given situation.”
Put in the context of systems integration, automation, and interface design, IoT definitely looks like AV. So visit the IoT Pavilion and optimize your understanding of how to design and build for these new modes of connection.
Kirsten Nelson is editor-at-large of SCN. Follow her on Twitter @kirstennelson.