For years now, the WebRTC developer and user community has been asking for WebRTC support on more platforms and operating systems. While native app developers can always use an existing stack such as OpenWebRTC, web app developers have to either limit their service offering to certain platforms or resort to browser plugins.

One important browser engine that does not currently support WebRTC is WebKit. WebKit is the engine behind a number of web browsers, notably Safari and Web (Epiphany), and also powers the iOS WebView as well as numerous other applications.

The goal of this site is to inform the WebRTC community of an ongoing project to bring WebRTC to WebKit, gather feedback, and provide a forum for discussions and contributions.

Let's dig a little deeper into the technical side of things. Most browsers have a similar architecture, with an application level (the browser itself) on top of a reusable engine that handles web pages (DOM / CSS / JS). In our case, this reusable engine is WebKit.

There are quite a few things to do in different underlying parts of WebKit to support WebRTC:

- the HTML `<video>` element (HTMLMediaElement) needs to be extended to support MediaStream as a possible source.

- the WebRTC APIs (as defined by the W3C specifications) must be implemented in WebCore. This is a key part of the project: once in WebCore, the implementation can be reused directly by other WebKit ports, without modification, to add support for WebRTC.

- the bindings part needs to expose those interfaces to JavaScript so the browser can use them. Not for the faint of heart when you want to support both the callback-based and promise-based versions of the APIs.
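To illustrate that last point: a single core implementation has to serve two calling conventions at once. The sketch below shows the general shape in plain JavaScript; `createOfferImpl` is a made-up stand-in for the core implementation, not WebKit binding code.

```javascript
// Illustrative core implementation shared by both API flavors.
// (Hypothetical name; real bindings are generated C++ in WebKit.)
function createOfferImpl() {
  return Promise.resolve({ type: 'offer', sdp: 'v=0\r\n' });
}

// One entry point serving both calling conventions.
function createOffer(successCallback, failureCallback) {
  const promise = createOfferImpl();
  if (typeof successCallback === 'function') {
    // Legacy flavor: createOffer(success, failure) returns nothing.
    promise.then(successCallback, failureCallback);
    return undefined;
  }
  // Modern flavor: createOffer() returns a promise.
  return promise;
}
```

Multiply this pattern by every method on every WebRTC interface, and keep it in sync as the specifications evolve, and the "not for the faint of heart" remark starts to make sense.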

Some work needs to be done at the browser level as well:

- the browser UI needs to implement all the security prompts and other pop-ups we have learned to love and cherish when a page tries to share one's camera, microphone, or screen.

- Hardware access needs to be handled at the browser level, so that, for example, the camera can be shared by several tabs at the same time, or browser-wide echo cancellation can be achieved.
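Those prompts are triggered by page-level calls such as getUserMedia, and the resulting stream typically ends up in a `<video>` element. A minimal sketch of that flow, with `attachStream` as a hypothetical helper (not a WebKit API):

```javascript
// Sketch: attach a captured MediaStream to a <video> element.
// attachStream is a hypothetical helper, not part of any browser API.
function attachStream(videoElement, stream) {
  // srcObject lets the element consume the stream directly,
  // without going through a URL.
  videoElement.srcObject = stream;
  return videoElement;
}

// In a real page the stream comes from getUserMedia, which is
// what triggers the browser's permission prompt:
// navigator.mediaDevices.getUserMedia({ video: true, audio: true })
//   .then((stream) => attachStream(document.querySelector('video'), stream));
```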

Some of the above work has been done over the years by several contributors, including Google, Nokia, Apple, Ericsson, Igalia, and Samsung. However, with the corresponding W3C and IETF specifications still in flux, the implementation needs to be kept in sync with them.

Once all this is done, you have JavaScript APIs available in your browser that do ... absolutely nothing. You still need a WebRTC stack, implementing all the necessary IETF specifications, hooked up to the platform-independent API implementation for anything to work. In WebCore, this is called the platform layer, or back end. This is where OpenWebRTC comes in first, with other stacks possibly following later.
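For a sense of what those JavaScript APIs let a page do once a stack is wired up, here is a sketch of the offer side of a call. The names follow the W3C specifications; the PeerConnection constructor and signaling channel are injected so the snippet stays self-contained, and none of this is WebKit code.

```javascript
// Sketch of the offer side of a call against the W3C RTCPeerConnection API.
// PeerConnection and signaling are injected; in a browser you would pass
// window.RTCPeerConnection and your own signaling channel.
async function startCall(PeerConnection, signaling) {
  const pc = new PeerConnection({
    iceServers: [{ urls: 'stun:stun.example.org' }],
  });
  // ICE candidates are trickled to the remote peer over signaling.
  pc.onicecandidate = (event) => {
    if (event.candidate) signaling.send({ candidate: event.candidate });
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ sdp: offer });
  return pc;
}
```

Every step here bottoms out in the platform layer: createOffer and ICE candidate gathering only produce real SDP and candidates when a stack such as OpenWebRTC sits behind the API.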

The focus of this effort for the time being is to:

1. Bring WebCore's implementation of WebRTC APIs (and corresponding UI elements) up to date with the specifications and keep it there.

2. Implement the glue layer needed to use OpenWebRTC as the core of the corresponding platform implementation.

3. Implement the browser level changes in the WebKitGTK+ port, on Linux.

And we’ve already gotten started, with a first set of WebKit bugs filed and patches submitted.

Eventually, more ports (browsers) and more WebRTC stacks should be supported, once the groundwork is done and included in the WebKit source tree.

It will take a lot of effort to test, polish, and bring the whole project to completion, and of course to convince your favorite browser vendor to adopt this WebRTC-enabled version of WebKit once it's done. Please contact us if you want to contribute in any way.