Perspective: New Horizons: Ushering in a Transformative New Technology Era
At the Concord Consortium, we’re always on the cutting edge of STEM educational technology. Sometimes that cutting edge feels razor sharp. This is one of those times. We are very close to seeing current capabilities and long-term potential converge in ways that will radically open up the technology landscape and accelerate the development of an immense range of activities.
Educational technology design and development is exciting because it never stops. In fact, it is always hurtling forward. To understand where we should focus, we must look far ahead. While many ideas on the horizon may seem futuristic, they are often very near, with significant advances arriving within as little as five to ten years. To unlock the full potential of these opportunities for STEM teaching and learning, however, we must understand them well and anticipate them early.
In order for educational technology to be useful, learners must be able to communicate their ideas and intent to it. The modern computer era has offered only a limited set of input methods—until recently, the mouse represented the only real innovation in input in almost 40 years. This drought has finally begun to abate, with touchscreens popping up everywhere. Learning is now possible for a cohort of learners too young to navigate traditional keyboards and mice. Though the wide use of multi-touch technology for STEM learning is still in its infancy, the Concord Consortium’s cutting-edge work with large-format multi-touch screens for museum exhibits represents one example of a new design paradigm. As multi-touch tables and walls become readily available, new modes of collaboration and highly interactive environments will blossom.
But touchscreens represent only one of the many ways new input can transform teaching and learning. Learning happens through animated conversation, verbal exchanges, and natural gestures, and is mediated by emotion. All of these will soon be available for input. Speech technology is proceeding at a breathtaking pace, as anyone who has interacted with Skype and Google’s translation tools or the marvelous Amazon Echo can attest. Google Docs now supports voice typing. Apple is integrating Siri into its newest version of OS X. Natural speech input is here to stay. We at the Concord Consortium are actively exploring the broad potential spoken language technologies offer for educational research and learning.
Gestures are similarly essential to communication, conveying information beyond the spoken word and providing cognitive support. Gesture sensing and response technology are rising quickly, from Leap Motion’s consumer device to Google’s tiny, impressive, radar-powered Project Soli. We are exploring this future through active research collaborations into gesture-based control of models and simulations.
The list continues. Conversational, chat-based input examples are bursting onto the scene—watch Facebook’s M, Google, and a raft of startups. Facial recognition technology is already mainstream. And headband brainwave sensors from companies such as Emotiv sense affective qualities such as focus, engagement, and excitement. These technologies have the potential to make learning personal in radical ways and tune it to optimal conditions, turning the vague “teachable moment” into a research reality.
Many of these possibilities owe a huge debt to a revolution that has been brewing for almost as long as computers themselves. The beginnings of artificial intelligence (AI) in the late 1950s whipsawed from stunning advances to deep cooldowns that made many write off the field entirely. Google brought “deep learning”—and much of the AI community—back from a deep sleep in 2012, as algorithms dove into the YouTube universe and independently identified an image—a cat, of course! With the starting gun officially fired, advances shot out of the gate. AI applications can now categorize real-world objects in real time, surpass humans at large-scale image recognition, learn to read unknown alphabets, and beat humans at video games. Now they have roundly beaten a world champion at the game of Go, a feat that only months earlier was thought to be a full decade away.
Educational technology has only begun to imagine the possibilities of these advances, but they are certainly manifold. We are currently exploring 1) the application of machine learning to provide real-time analysis and feedback on student argumentation, 2) the use of deep learning and other techniques to provide guidance to teachers and students playing genetics games, and 3) the use of data-mining techniques to analyze learner-generated data and spur actions that improve teaching and learning.
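As a concrete illustration of the third strand, data mining learner-generated logs can be as simple as counting telltale actions. The sketch below is a toy heuristic of my own invention, not Concord Consortium code and not the deep-learning techniques described above: the event names and threshold are hypothetical.

```python
from collections import Counter

# Hypothetical event log: (student_id, action) pairs from a learning game.
# Names and the threshold below are illustrative assumptions.
events = [
    ("ana", "hint"), ("ana", "retry"), ("ana", "hint"), ("ana", "hint"),
    ("ben", "solve"), ("ben", "solve"),
    ("cho", "retry"), ("cho", "hint"),
]

def students_needing_help(events, hint_threshold=3):
    """Flag students whose hint requests meet a simple threshold."""
    hints = Counter(s for s, action in events if action == "hint")
    return sorted(s for s, n in hints.items() if n >= hint_threshold)

print(students_needing_help(events))  # ['ana']
```

A real system would replace the threshold rule with a trained model, but the pipeline shape—ingest events, aggregate per learner, trigger an action—stays the same.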
Internet of Things
One result of the decades-long reign of Moore’s Law is the broad “Internet of Things” (IoT). The smartphone revolution has quietly ushered in fleets of tiny, low-cost, high-powered computing devices. Now their astonishing ramifications are becoming clear. With entire systems on a single chip, devices can be programmed with ease and placed into almost anything. Devices the size of a postage stamp monitor temperature and airflow in every room of a remote manufacturing facility and track precise locations and engine use across full vehicle fleets.
The second wave of this revolution is already here: drones, intelligent toys, and tiny tracking devices for cars, keys, and even kids. But the educational potential of these devices has yet to be fully explored. Some projects have rightly made news—the wonderful (now amazingly $5) Raspberry Pi comes to mind—but the time is ripe to recast the IoT for education more broadly. If sensors can monitor assembly-line conditions, they can also turn a science laboratory into a data-streaming environment or bring remote ecosystem monitoring to children’s fingertips. The Concord Consortium’s vision introduced the probeware revolution decades ago. Today, IoT technology offers an equally revolutionary set of opportunities for teaching and learning.
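To make the data-streaming idea concrete, here is a minimal sketch of smoothing a stream of sensor readings with a rolling mean—the kind of processing a classroom IoT dashboard might do. The class name and simulated readings are my own illustrative assumptions, not part of any Concord Consortium product.

```python
from collections import deque

class RollingMean:
    """Maintain a running mean over the last `size` sensor readings."""
    def __init__(self, size=5):
        self.window = deque(maxlen=size)  # old readings drop off automatically

    def add(self, reading):
        self.window.append(reading)
        return sum(self.window) / len(self.window)

# Simulated temperature readings (degrees C) from a hypothetical classroom sensor.
stream = RollingMean(size=3)
readings = [21.0, 21.5, 22.0, 25.0]
smoothed = [round(stream.add(r), 2) for r in readings]
print(smoothed)  # [21.0, 21.25, 21.5, 22.83]
```

The sudden jump to 25.0 in the raw data barely moves the smoothed value, which is why streaming summaries like this are useful before plotting or alerting.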
Virtual Reality
Having survived a full cycle of bust-and-boom expectations, virtual reality is now back, and this time it is delivering on all its promises. From the breathtaking HTC Vive and consumer-ready Oculus Rift to the barebones, yet amazing, Google Cardboard, immersion into new worlds is coming to the masses in an entirely new medium of expression and experience. Full implementation of this brave new world is yet to come. Movies and games will arrive first. The New York Times is already experimenting with its potential for journalism. But the opportunities for education are still wide open. What will happen when we transport learners inside a chemical reaction or drop them on an alien world to collect samples as scientists? Google Expeditions offers one great current example—allowing a teacher to “drive” a classroom of students to gape at towering Mayan ruins, then teleport them atop the Great Wall of China, or take them on a global geology tour from Ayers Rock to Arches National Park.
Virtual reality is powerful stuff, creating “presence” that tricks our brains into thinking we’re truly somewhere else. Full, persistent virtual reality worlds will offer radically new inquiry science opportunities, with experiments playing out not across minutes, but over months. And its sister technology, augmented reality, will layer notes and real-time visualizations onto reality, annotating our real-world views of everything from pond ecosystems to intricate engineering processes.
As always with such revolutions, we don’t know exactly where this will all lead. What is clear is that tremendous learning transformations lie on the near horizon. We invite you to join us as we explore the possibilities.
Chad Dorsey (email@example.com) is President of the Concord Consortium.