A demo for the Sensor APIs on the web

Katerina Skroumpelou
8 min read · Oct 4, 2022
“woman using smartphone to control a video game on a laptop — painting by David Hockney” — DALL-E

Turn your phone (which has access to sensors) into a remote control (a Wii Remote) for an application that’s running on your computer (which — usually — does not have motion sensors) — using the web

Intro — When and why

I wanted to experiment with some web technologies and web APIs this summer (as if I were not stressed enough), so I decided to revisit (and rebuild for the better) an old idea of mine: turning my phone into a gamepad (or, more accurately, a Wii Remote of sorts) and using it to control some graphics in a web application running on my computer. I know this idea is not super innovative, and I’m sure it has been executed before, maybe even in a better way. But still, I wanted to play around with sensors, data streams, and graphics.

I had toyed with this idea back in 2015, when I first built a “move your phone and watch a cube follow that motion on the screen, on a different machine” demo. I explain how I did it in this (very old, please don’t judge too much) blog post of mine, accompanied by a video. I revisited the idea in 2018, when I demoed a version of it at the WeAreDevelopers conference in Vienna, on May 18th, 2018. That time, I had decided to use solely web technologies and APIs. Here’s the YouTube link of my talk!

Now, in 2022, I decided to revisit this idea. I rebuilt the applications from scratch, and added a number of features to make them more user friendly, and generally improved the whole thing. I’ve also added documentation and instructions on how to play the game, and how to build it, too. All you need is Chrome on your phone (for the “gamepad”), and another machine with a web browser (computer, phone, tablet, whatever — the larger the screen the better) to display the actual game to be controlled.

Purpose and achievements

The purpose of this game/experiment/demo is to showcase the usage of the phone’s sensors through the web platform; then, to suggest some ways this sensor data can be used to create interactive experiences on the web; and, most importantly, to provide a visualization of the sensor readings!

Sensors are already used in a wide variety of applications. For native applications, of course, sensors are a very common feature, and most applications use some kind of sensor in some way (whether it’s a motion sensor, an environmental sensor, or a combination of sensor data). We don’t see that as often in web applications, however. I don’t have any numbers to back up my claim, it’s just from pure observation. I may be wrong. It does not really matter, to be honest. :P

Sensor Data on the Web

Sensor data is exposed to the Open Web Platform, and there are a number of Sensor APIs for the web that access this data. In this example we are mostly accessing sensor data through the Generic Sensor API. I find it necessary here to include the abstract of the Generic Sensor API specification:

“This specification defines a framework for exposing sensor data to the Open Web Platform in a consistent way. It does so by defining a blueprint for writing specifications of concrete sensors along with an abstract Sensor interface that can be extended to accommodate different sensor types.”

Types of Sensors

The following section copies parts of the Generic Sensor API specification, taken from here: https://www.w3.org/TR/generic-sensor/. You will see me linking this page a lot in this article.

High level — Low level

There are what we call “low level” sensors and “high level” sensors. The low level sensors are sensors which “are characterized by their implementation”. That means that they provide readings of an actual sensor chip, for example the Gyroscope. The high level sensors are named after their readings, regardless of the implementation. For example, they can be the result of algorithms applied to low-level sensors, like, for instance, the pedometer.

Available sensors

  • Environmental

Sensors that measure physical properties of the environment they are in. These are the Ambient Light Sensor, the Proximity Sensor, and the Magnetometer.

  • Inertial

Sensors based on inertia and related measurement concepts. Such sensors are the Accelerometer and the Gyroscope.

  • Fusion

Sensors whose measurements are “fused together” by fusion algorithms. The fusion algorithms might require data from one or multiple sources.

How the Generic Sensor API definition works

The Generic Sensor API provides a generic Sensor interface which each concrete sensor extends. The interface accepts some inputs and returns some outputs, exposes methods to start and stop the sensor, and exposes event handlers for sensor readings and errors. It defines a sensor lifecycle as well. You can read all the details here.
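As a minimal sketch, here is what that lifecycle looks like for one concrete sensor, the Accelerometer (every concrete sensor follows the same shape: a constructor taking options, start()/stop() methods, and “reading”/“error” events):

```javascript
// Minimal sketch of the Sensor lifecycle, using the Accelerometer as the
// concrete sensor. The guard at the top keeps it from crashing in runtimes
// that don't expose the constructor at all.
function readAccelerometer() {
  if (typeof Accelerometer === 'undefined') {
    return null; // this runtime does not expose the sensor at all
  }
  const sensor = new Accelerometer({ frequency: 10 }); // 10 readings per second
  sensor.addEventListener('reading', () => {
    console.log(`x: ${sensor.x}, y: ${sensor.y}, z: ${sensor.z}`);
  });
  sensor.addEventListener('error', (event) => {
    console.error('Sensor error:', event.error.name);
  });
  sensor.start(); // activates the sensor; sensor.stop() deactivates it
  return sensor;
}
```

Swap in Gyroscope, Magnetometer, and so on, and the code stays the same — that is the whole point of the generic interface.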

Compatibility

Here you can see the Sensor interface browser availability.

In Chrome, for some of the sensors to work, you may have to enable some flags to give the browser access to them. These flags, on Google Chrome, are the following:

  • chrome://flags/#enable-generic-sensor
  • chrome://flags/#enable-generic-sensor-extra-classes

This is an interesting read that explains the implementation of the sensors in Chromium.

Checking availability

In any case, you can check the compatibility by instantiating the sensor and seeing whether you get readings back. If you’re getting back readings, you are OK. If you are NOT getting back readings, there may be a number of issues:

  1. Your device has the sensor but your browser does not support the sensor
  2. Your browser supports the sensor but your device does not have the sensor
  3. Your device has the sensor, your browser supports the sensor, but the user did not give permission

The solutions to these issues are the following:

  1. Use try/catch when instantiating the sensor object
  2. Check for the sensor’s existence in `window`
  3. Use the Permissions API to ask for the user’s permission
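Putting the three solutions together, a defensive start-up routine might look like this sketch (with the Gyroscope as the example sensor):

```javascript
// A sketch combining the three safeguards above, using the Gyroscope.
async function safeStartGyroscope(onReading) {
  // 2. Check for the sensor's existence before touching it.
  if (!('Gyroscope' in globalThis)) {
    console.warn('This browser does not expose the Gyroscope');
    return null;
  }
  // 3. Ask for the user's permission via the Permissions API.
  try {
    const status = await navigator.permissions.query({ name: 'gyroscope' });
    if (status.state === 'denied') {
      console.warn('Permission to use the gyroscope was denied');
      return null;
    }
  } catch {
    // Some browsers don't recognize 'gyroscope' as a permission name.
  }
  // 1. try/catch when instantiating: construction can still fail, and the
  //    'error' event reports a missing sensor chip on the device itself.
  try {
    const sensor = new Gyroscope({ frequency: 60 });
    sensor.addEventListener('error', (e) => console.error(e.error.name));
    sensor.addEventListener('reading', () => onReading(sensor.x, sensor.y, sensor.z));
    sensor.start();
    return sensor;
  } catch (err) {
    console.error('Could not start the gyroscope:', err.name);
    return null;
  }
}
```

Note that a device lacking the actual sensor chip surfaces as a “NotReadableError” on the “error” event, not as a thrown exception — which is why you need both the try/catch and the error listener.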

Threats

Of course, using device sensors, and gaining access to device sensors, has many security and privacy implications. The more information and data you have about a device, the easier that device becomes trackable, traceable, exploitable. Some common threats are listed here. Briefly:

  • Location Tracking
  • Keystroke Monitoring
  • Device Fingerprinting
  • User Identifying
  • Eavesdropping (yes, with the gyroscope)

Safeguards

Again, copying from the docs (https://www.w3.org/TR/generic-sensor/#mitigation-strategies):

  • Use HTTPS (sensor APIs are only available in secure contexts)
  • Ask for the user’s permission using the Permissions API
  • Sensors only deliver readings when the page is visible and focused
  • Sampling frequency and accuracy control
  • Always inform the user when a sensor API is in use

Why is this important? Why do I need it?

This is the million dollar question. I find the use of sensors on the web, apart from exciting, extremely helpful. It opens up lots of possibilities, and brings the web experience even closer to the native experience. Sensors are tied to our mobile phones, and to the way we think about our mobile apps: from device orientation, to pedometers, to activity tracking, geolocation, brightness adjustment, and the list goes on. All these features are available to the web, and the way we build our apps and our PWAs can be enhanced even further when we provide the user with the full native experience that sensor readings offer.

Possible applications

This is just some brainstorming, I cannot wait to hear your ideas:

  • Adjust colors & theme according to light
  • Create games
  • Native-like interactions and events
  • Experiment with potential applications in navigation
  • Build your own fusion sensors

The game I created

I created a simple game that takes the readings of a number of different sensors on a mobile device, and sends this data (through a WebSocket server) to an application running on another machine (preferably a computer or a device with a larger screen), where it controls some animations.

This is definitely not an everyday usage example, but it certainly visualizes the readings these sensors expose to the web platform. So, through playing this game, the user can see in a visual way the data they can access if they start using the Generic Sensor API on the web.
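The plumbing is straightforward: read the sensor on the phone, serialize each reading, and push it through the socket. Here is a rough sketch — the message shape is made up for illustration, and the demo’s actual protocol lives in the repository:

```javascript
// Sketch: serialize each sensor reading and stream it over a WebSocket.
// The message shape here is hypothetical, not the demo's real protocol.
function packReading(sensorType, reading) {
  return JSON.stringify({ type: sensorType, t: Date.now(), ...reading });
}

function streamSensor(socket, sensor, sensorType) {
  sensor.addEventListener('reading', () => {
    if (socket.readyState === socket.OPEN) { // only send while connected
      socket.send(packReading(sensorType, { x: sensor.x, y: sensor.y, z: sensor.z }));
    }
  });
  sensor.start();
}
```

On the other end, the “Playground” just parses each message and feeds the values into its animations.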

The game parts

The game consists of two parts. The “gamepad” or the “Wii Remote”, and the “Dashboard of Activities” or “Playground”. You can access the application here:

https://ws-pakotinia.web.app/

Or here:

https://ws-pakotinia.firebaseapp.com/

If you are on your phone, click the “Gamepad” button, if you’re on your computer, click the “Playground” button.

Please, read the instructions before playing. They will clear up any questions you may have.

Gamepad

The “Gamepad” is your mobile device. It works best if you use Google Chrome and enable the flags, as instructed in the intro screen. Once you “log in” to the game, you can start moving your phone to see the visualization of the movements on the dashboard. Read the instructions if something is unclear.

Dashboard of activities

The “Dashboard of Activities” or the “Playground” is a collection of 6 different games you can play. Read the instructions carefully, please.

Challenges

Types

The main challenge I faced while building this project was that I wanted to use TypeScript, so I needed all the types for the sensors I would be using. Kenneth has created the @types/w3c-generic-sensor package. This lacked the types for the AmbientLightSensor, so I added them :). For some reason, I am so proud of that PR. I’ve written more complicated code in my life, I’m sure, but for that specific one, just the thought that it’s used by so many people puts a grin on my face.

Polyfills

Kenneth has also created the motion-sensors-polyfill. That was a life saver, too.
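If you want native sensors where they exist and the polyfill everywhere else, one way to wire it up is a lazy lookup. This is a sketch, and the `./motion-sensors.js` path is an assumption about where the polyfill’s module file lives in your project:

```javascript
// Sketch: prefer the native Gyroscope, fall back to the polyfill.
// The './motion-sensors.js' import path is an assumption — adjust it
// to wherever the polyfill module ships in your build.
async function getGyroscopeConstructor() {
  if ('Gyroscope' in globalThis) {
    return globalThis.Gyroscope; // native implementation is available
  }
  try {
    const polyfill = await import('./motion-sensors.js');
    return polyfill.Gyroscope;
  } catch {
    return null; // neither native support nor the polyfill could be loaded
  }
}
```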

Limitations

Of course, there are limitations, both to building this game, and to using these sensors, and to integrating sensors to your web application. I’m listing just a few that I thought of (or that I ran into):

  • not all devices have all sensors
  • not all devices obtain data at the same frequency
  • not all device sensors have the same sensitivity
  • not all browsers use the same coordinate system
  • even if a device has a sensor, the browser might not support reading it (e.g. light)!

Accessibility

If you are going to integrate sensors into your web applications, make sure that you provide alternative ways of interaction, and consider the accessibility implications and limitations.

Where can I find your game and play it?

Here is the game:

https://ws-pakotinia.web.app/

Or here:

https://ws-pakotinia.firebaseapp.com/

And here are the docs/instructions: https://mandarini.github.io/sensors-demo/

Here is the repository for the code:

https://github.com/mandarini/sensors-demo

Further reading and references

This is the most important section of this article, since this article does not go deep into explaining sensors. It just explains what I did, and it points you to the detailed explanation of how I created this demo.

So, here go some links for you:

Now what?

I’d love to speak more about this, so, yes, reach out to me!

I also want to hear your feedback, and your suggestions for more implementations of the Sensors in our web applications!

If I forgot to give credit to anyone, please let me know and I’ll update this with the missing credit.

And, follow me on Twitter, and visit my website: https://psyber.city
