Channel: as days pass by

Remote Applause

That’s a cool idea, I thought.

So I built Remote Applause, which does exactly that. Give your session a name, and it gives you a page to be the “stage”, and a link to send to everyone in the “audience”. The audience link has “clap” and “laugh” buttons; when anyone presses one, your stage page plays the sound of a laughter track or applause. Quite neat for an afternoon hack, so I thought I’d talk about how it works.

[Image: the Remote Applause audience page]

Basically, it’s all driven by WebRTC data connections. WebRTC is notoriously difficult to get right, but fortunately PeerJS exists, which does most of the heavy lifting.[1] It seemed to be abandoned a few years ago, but they’ve picked it up again since, which is good news. Essentially, the way the thing works is as follows:

When you name your session, the “stage” page calculates a unique ID from that name and registers with that ID on PeerJS’s coordination server. The audience page calculates the same ID[2], registers itself under a random ID, and opens a PeerJS data connection to the stage page (because it can calculate the stage’s ID from the session name). PeerJS is just using WebRTC data connections under the covers, but the PeerJS people provide the signalling server, which the main alternative, simple-peer, doesn’t; I didn’t want to have to run a signalling server myself, because then I’d need server-side hosting for it.
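That handshake looks roughly like this, as a sketch rather than the actual Remote Applause source: it assumes PeerJS is loaded in the browser (providing a global `Peer`), and the function names are invented for illustration.

```javascript
// Hypothetical wiring for the two pages, assuming PeerJS is loaded so
// `Peer` exists as a global. Names here are illustrative, not the real code.

// The stage registers under an ID derived from the session name, so any
// audience page that knows the name can find it.
function openStage(sessionId, onMessage) {
  const stage = new Peer(sessionId); // registers on PeerJS's signalling server
  stage.on('connection', (conn) => {
    // Each audience member dials in; their messages are "clap" or "laugh".
    conn.on('data', (msg) => onMessage(conn.peer, msg));
  });
  return stage;
}

// An audience member takes a random ID and opens a data connection to the
// stage's derived ID. WebRTC data channels under the covers.
function joinAudience(sessionId) {
  const me = new Peer(); // random ID assigned by the server
  let conn = null;
  me.on('open', () => {
    conn = me.connect(sessionId);
  });
  return {
    sendClap: () => conn && conn.send('clap'),
    sendLaugh: () => conn && conn.send('laugh'),
  };
}
```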

The audience page can then send a “clap” or “laugh” message down that connection whenever the respective button is pressed, and the stage page receives the message and plays the appropriate sound. Well, it’s a fraction more complex than that. The two sounds, clapping and laughing, are constantly playing on a loop, but muted; when the stage receives messages, it changes the volume on the sounds. Fortunately, the stage knows how many incoming connections there are, and it knows who each message came from, so it can scale the volume change appropriately: if most of the audience send a clap, the stage can jack the clapping volume up to 100%, and if only a few people do, it can still play the clapping but at much lower volume. This largely does away with the need for moderation; a malicious actor who hammers the clap button as fast as they can will, at the very worst, only make the applause track play at full volume, and most of the time they’ll be one person in fifty and so can only make it play at a couple of percent volume.
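The scaling idea boils down to a tiny pure function, something like this sketch (names invented, not the actual source): count each sender once, and set the loop’s volume to the fraction of the connected audience that clapped recently.

```javascript
// Sketch of the volume-scaling idea (hypothetical names). Each sender is
// counted once, however fast they hammer the button, so a lone spammer in
// an audience of fifty can only ever reach a couple of percent volume.
function clapVolume(recentClappers, connectedCount) {
  if (connectedCount === 0) return 0;          // nobody connected, stay silent
  const fraction = recentClappers.size / connectedCount; // deduped senders
  return Math.min(1, fraction);                // 1.0 == full applause
}
```

So `clapVolume(new Set(['spammer']), 50)` gives 0.02, while three clappers out of three gives the full 1.0.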

There are a few extra wrinkles. The first is that autoplaying sounds are a no-no, because of all the awful advertising people who misused them to autoplay videos as soon as you opened a page; sound can only start playing in response to a user gesture of some kind. So the stage has an “enable sounds” checkbox; turning that checkbox on counts as the user gesture, so we can start actually playing the sounds (at zero volume), and we also take advantage of it to send a message to all the connected audience pages telling them sound is enabled… and the audience pages don’t show the buttons until they get that message, which is handy. The second is that when the stage receives a clap or laugh from an audience member, it rebroadcasts that to all the other audience members; this means each audience page can show a little clap emoji when it happens, so you can see how many other people are clapping as well as hear it over the conference audio. And the third… well, the third is a bit more annoying.
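The rebroadcast step is simple enough to sketch (again with invented names, not the real source): echo the message to every connection except the one it came from.

```javascript
// Sketch of the rebroadcast step (hypothetical names): when one audience
// member claps, the stage echoes it to everyone else, so each page can
// show a little clap emoji for other people's claps.
function rebroadcast(connections, senderId, msg) {
  for (const conn of connections) {
    if (conn.peer !== senderId) {
      conn.send(msg); // skip the original sender; they already know they clapped
    }
  }
}
```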

If an audience member closes their page, the stage ought to be told about that somehow. And it is… in Firefox: the PeerJS connection object fires a close event when this happens, so, hooray. In Chrome, though, we never get that event. As far as I can tell it’s a known bug in PeerJS, or possibly in Chrome’s WebRTC implementation; I didn’t manage to track it down further than the PeerJS issues list. So on the stage we also poll every PeerJS connection object every few seconds with setInterval: it exposes the underlying WebRTC connection object, and that does have a property reporting its current state. We check that, and if it shows disconnected, we treat it the same as the close event. Easily enough solved.
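The polling workaround might look like this sketch. It assumes the PeerJS DataConnection exposes the raw RTCPeerConnection as `peerConnection`, whose `iceConnectionState` reports `'disconnected'` when the peer has gone away; the function names are invented.

```javascript
// Sketch of the Chrome workaround (assumed property names: PeerJS exposes
// the raw RTCPeerConnection as `peerConnection`, with an
// `iceConnectionState` string). Dead connections get the same handling as
// a proper `close` event would have given them.
function pruneDeadConnections(connections, onClose) {
  return connections.filter((conn) => {
    const state = conn.peerConnection && conn.peerConnection.iceConnectionState;
    if (state === 'disconnected' || state === 'failed' || state === 'closed') {
      onClose(conn); // treat it exactly like the missing `close` event
      return false;  // drop it from the live list
    }
    return true;
  });
}

// Run it every few seconds on the stage, e.g.:
// setInterval(() => { audience = pruneDeadConnections(audience, handleClose); }, 3000);
```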

There are more complexities than that, though. WebRTC is pretty goshdarn flaky, in my experience. If the stage runner is using a lot of their bandwidth, then the connections to the stage drop, like, a lot, and need to be reconnected. I suppose it would be possible to quietly gloss over this in the UI and just save stuff up for when the connection comes back, but I didn’t do that, firstly because I hate it when an app pretends it’s working but actually it isn’t, and secondly because of…

Latency. This is the other big problem, and I don’t think it’s one that Remote Applause can fix, because it’s not RA’s problem. You see, the model for this is that I’m streaming myself giving a talk as part of an online conference, right? Now, testing has demonstrated that when doing this on Twitch or YouTube Live or whatever, there’s a delay of anything from 5 to 30 seconds or so in between me saying something and the audience hearing it. Anyone who’s tried interacting with the live chat while streaming will have experienced this. Normally that’s not all that big a problem (except for interacting with the live chat) but it’s definitely a problem for this, because even if Remote Applause is instantaneous (which it pretty much is), when you press the button to applaud, the speaker is 10 seconds further into their talk. So you’ll be applauding the wrong thing. I’m not sure that’s fixable; it’s pretty much an inherent limitation of streaming video. Microsoft reputedly have a low latency streaming video service but most people aren’t using it; maybe Twitch and YouTube will adopt this technology.

Still, it was a fun little project! Nice to have a reason to use PeerJS for something. And it’s hosted on GitHub Pages because it’s all client side, so it doesn’t cost me anything to run, which is nice, and means I can just leave it up even if nobody’s using it. And I quite like the pictures, too: the stage page shows a view of an audience from the stage (specifically, the old Met in New York), and the audience page shows a speaker on stage (specifically, Rika Jansen (page in Dutch), a Dutch singer, mostly because I liked the picture and she looks cool).

  1. But it requires JavaScript! Aren’t you always going on about that, Stuart? Well, yes. However, it’s flat-out not possible to do real-time two-way communication sanely without it, so I’m OK with requiring JS in this particular case. For your restaurant menu, no.
  2. Using a quick JS version of Java’s hashCode function, because PeerJS has requirements on ID strings that exclude some of the characters in base64, so I couldn’t use window.btoa(); I didn’t want (or need) a whole hash library, and the Web Crypto API is complex.
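The hashing step in footnote 2 can be sketched like this, a guess at the approach rather than the actual source (the `'ra-'` prefix and `sessionId` name are invented):

```javascript
// A quick JS port of Java's String.hashCode (h = 31*h + c with 32-bit
// overflow), stringified in base36 so the result only contains characters
// PeerJS accepts in an ID. A sketch of the footnote's approach, not the
// real Remote Applause code.
function sessionId(name) {
  let hash = 0;
  for (let i = 0; i < name.length; i++) {
    // Math.imul gives the 32-bit wrapped product, matching Java's int overflow
    hash = (Math.imul(31, hash) + name.charCodeAt(i)) | 0;
  }
  return 'ra-' + Math.abs(hash).toString(36); // same name → same ID, every time
}
```

The important property is determinism: the stage and every audience page compute the same ID from the same session name without ever talking to each other.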
