A couple of decades ago, if you bought something of any reasonable complexity, odds are it came with a call center number to dial in case something went wrong. Much like the airline industry, economic pressures on contact centers shifted their modus operandi from customer delight to cost reduction. Unsurprisingly, this has not helped contact centers' public image. It's no wonder the web came along to augment and replace much of this experience – but nowhere near all of it. Today, WebRTC offers a unique opportunity for contact centers to combine their two primary means of customer interaction – the web and phone calls – and entirely change the dynamic to the benefit of both sides.
To explore what this looks like, we invited Rob Welbourn to walk us through a typical WebRTC-enabled contact center infrastructure. Rob has been working at the intersection of telephony and web technologies for more than 8 years, starting at Covergence. He continued this work at Acme Packet, Oracle, and Cafe X, where it coalesced into deep enterprise and contact center WebRTC expertise; he is now a consultant for hire.
Please see Rob’s great technology brief on WebRTC architectures in the Contact Center below.
{“intro-by”: “chad hart“}
Introduction
If ever there was an area where WebRTC is expected to have a major impact, it is surely the contact center. By now most readers of this blog have seen the Amazon Kindle Fire commercials, featuring the get-help-now Mayday button and Amy, the annoyingly perky call center agent:
https://www.youtube.com/watch?v=0dU3T655xV0
Those in the industry know that Mayday’s voice and video capability use WebRTC, as detailed by Chad and confirmed by Google WebRTC development lead Justin Uberti. When combined with screen sharing, annotation and co-browsing, this makes for a compelling package. Executives in charge of call centers have taken notice, and are looking to their technology suppliers to spice up their call centers in the same way.
Indeed, the contact center is a very instructive example of how WebRTC can be used to enhance a well-established, existing system. For those who doubt that the technology is mature enough for widespread deployment, I'll let you in on a dirty little secret: WebRTC on the consumer side of the call center isn't happening in web browsers, it's happening in mobile apps. I'll say more about this later.
What a Contact Center looks like
Before we examine how we can turbocharge a contact center with WebRTC, let’s take a look at the main component parts, and some of the pain points that both customers and call center staff encounter in their daily lives.
(Disclaimer: This sketch is a simplified caricature of a call center, drawn from the author’s experience with a number of different systems. The same is true for the descriptions of WebRTC gateways in the following sections, which should be viewed as idealized and not a description of any one vendor’s offerings.)
The web-to-call correlation problem
Let’s imagine that we’re a consumer, calling our auto insurance company. Perhaps we’ve been to their website, or maybe we’re using their shiny new mobile app on our smartphone. Either way, we’ve logged into the insurer’s web portal, to get an update on an insurance claim, update our coverage, or whatever. (And yes, even if we’re using a mobile app, we’re most likely still communicating with a web server. It’s only the presentation layer that’s different.)
Now suppose that we actually want to talk to a human being who can help us. If we’re lucky, the web site will provide a phone number in an easy-to-find place, or maybe our mobile app will automatically bring up the phone’s dialer to make the call. However, at this point, all of our contextual information, such as our identity and the web page we were on, gets lost.
The main problem here is that it is not easy to correlate the web session with the phone call. The PSTN provides no way of attaching a context identifier from a web session to a phone call, leaving the caller ID or dialed number as the only clues in the call signaling. That leaves us with the following possibilities:
- Use the caller ID. This is ambiguous at best, in that a phone number doesn’t definitively identify a person, and mobile device APIs in any case forbid apps from harvesting a device’s phone number, so it can’t be readily passed into the contact center by the app.
- Use the called number. Some contact centers use the concept of the steering pool, where a phone number from a pool is used to temporarily identify a particular session, which could potentially be used by a mobile app. However, the redial list is the enemy of this idea; since the number is temporarily allocated to a session, you wouldn’t want a customer mistakenly thinking they could use the same number to call back later.
- Have the contact center call the customer back when it’s their turn, and an agent is about to become available. This is in fact a viable approach, but complex to implement, largely for reasons of not tying up an agent while an attempt is made to reach the customer and verify they still want the call.
- Use WebRTC for in-app, contextual communications.
Customer-side interaction
But let’s continue with the premise that the customer has made a regular phone call to the contact center. From the diagram above, we can see that the first entity the call hits is the ingress gateway (if via TDM) or Session Border Controller (if via a SIP trunk). This will most likely route the call directly to an Interactive Voice Response (IVR) system, to give the caller an opportunity to perform self-service actions, such as looking up their account balance. Depending on the vendor, the ingress gateway or SBC may itself take part in the interaction, by hosting a VoiceXML browser, as is the case with Cisco products; or else the IVR may be an application running on a SIP-connected media server platform.
Whatever the specific IVR architecture, it will certainly connect to the same customer database used by the web portal, but using DTMF to input an account number and PIN, rather than a username and password. If the customer is lucky, they have managed to find an account statement that tells them what their account number is; if not, the conversation with the agent is going to start by having them spell their name, give the last four digits of their Tax ID, and so on. Not only that, but if a PIN is used, it is doubtless the same one used for their bank card, garage door opener and everything else, which hardly promotes security. This whole process is time-consuming for both customer and agent, error-prone, and generally frustrating.
At this point the IVR has determined who the caller is, and why they are calling – “Press 1 for auto claims, 2 for household claims…”; the call now needs to be held in a queue, awaiting a suitably qualified agent. The job of managing the pool of agents with their various skills, and the queues of incoming calls, is the job of the Automated Call Distributor (ACD). An ACD typically has a well-defined but proprietary interface or protocol by which it interacts with an IVR. The IVR will submit various data items to the ACD, notably the caller ID, called number, customer identity and required skill group. The ACD may then itself interrogate the customer database, perhaps to determine whether this is a customer who gets priority service, or whether they have a delinquent account and need to be handled specially, and so on, so that the call can be added to the appropriate queue. The ACD may also be able to provide the IVR with the estimated wait time for an agent, for feedback to the caller.
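Since the real IVR-to-ACD interface is vendor-proprietary, the handoff just described can only be sketched in spirit; the field names and priority rules below are invented for illustration:

```javascript
// Sketch of the IVR-to-ACD handoff described above (illustrative only).

// What the IVR has learned by the end of the menu tree.
const ivrResult = {
  callerId: '+16175551234',
  calledNumber: '+18005550100',
  customerId: 'cust-1001',
  skillGroup: 'auto-claims',       // from "Press 1 for auto claims..."
};

// The ACD enriches this from the customer database before queuing.
function enqueue(result, customerRecord) {
  const priority = customerRecord.tier === 'gold' ? 1
                 : customerRecord.delinquent     ? 9   // special-handling queue
                 : 5;
  return {
    ...result,
    priority,
    queue: `${result.skillGroup}-p${priority}`,
    estimatedWaitSec: 30 * priority,   // placeholder; real ACDs model this
  };
}

const queued = enqueue(ivrResult, { tier: 'gold', delinquent: false });
console.log(queued.queue, queued.estimatedWaitSec);
```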
Agent-side interaction
Let’s turn for a moment to the agent’s side of the contact center. An agent will invariably have a phone (whether a physical device or a soft client), an interface to the ACD (possibly a custom “thick client”, but increasingly a web-based one in modern contact centers) and a view into the customer database. For business-to-business contact centers, the agent may also be connected to a CRM system: Salesforce.com, Siebel, Oracle CRM, Microsoft Dynamics, and so on.
For the purposes of our discussion, the agent’s phone is connected to a PBX, the PBX will provide call status information to the ACD using a standard telephony interface such as JTAPI, and the ACD will in turn use the same interface to direct incoming calls to agents. This would typically be the case where an organization has a Cisco or Avaya PBX, for example, and the use of standard JTAPI allows for the deployment of a multi-vendor call center. Other vendors, notably Genesys, have taken the approach of building their call center software using a SIP proxy as a key component, and the agents register their phones directly with the ACD rather than with a PBX.
The agent will log into the ACD at the beginning of their shift, signaling that they are available. Call handling is then directed by the ACD, and when a call is passed to an agent, the ACD pushes down the customer ID to the agent’s desktop, which is then used to automatically do a “screen pop” of the customer’s account details from the customer database or CRM system.
Call handling in a contact center is thus a complex orchestration between an IVR, ACD, PBX and various pieces of enterprise software, usually requiring the writing of custom scripts and plugins to make everything work together. Not only this, but contact centers also make use of analytics software, call recording systems, and so on.
The caller experience
Let’s return to our caller, parked on the IVR and being played insipid on-hold music. When the call eventually reaches the head of the queue, the ACD will instruct the IVR to transfer the call to a specific agent. The agent gets the screen pop, asks the caller to verify their identity, and then begins the process of asking why they called.
To summarize, the contact center experience typically involves:
- Loss of contextual information from an existing web or mobile app session.
- Navigating IVR Hell.
- Waiting on hold.
- Re-establishing identity and context.
- A voice-only experience with a faceless representative, and lack of collaboration tools.
It’s no wonder this is judged a poor experience, for customers and contact center agents alike.
Adding WebRTC to the Contact Center
WebRTC is part of what the contact center business calls the "omnichannel experience", in which multiple modalities of communication between a customer and the contact center all work together seamlessly. An interaction may start on social media, be escalated to chat, from there to voice and video, and possibly be accompanied by screen sharing and co-browsing. But how is this accomplished?
The key thing to hold in mind is that voice and video are integrated into the contact center app, and that context is at all times preserved. As a customer, you have already established your identity with the contact center’s web portal; there’s no need to have the PSTN strip that away when you want to talk to a human being. And when you do get put through to an agent, why shouldn’t they be able to view the same web page that you do? (Subject to permission, of course.)
To do this, we need the following components (shown colored purple in the above diagram):
- A back-end to the web portal that is capable of acting as a pseudo-IVR. As far as the ACD is concerned, it’s getting a regular incoming call, which has to be queued and transferred to an agent as usual. The fact that this is a WebRTC call and not from the PSTN is totally transparent to the ACD.
- A co-browsing server – this acts as a rendezvous point between the customer and the agent for a particular co-browsing session, where changes to the customer’s web page are published over a secure WebSockets (WSS) connection, and the agent subscribes to those changes. The actual details of how this works are proprietary and vary between vendors; however, the DOM Mutation Observer API is generally at the heart of the toolkit used. When the agent wishes to navigate on behalf of the customer, mouse-click events are sent back over the WSS connection from the agent and injected into the customer’s web page using a JavaScript or jQuery simulated mouse click event. Annotation works similarly, with a mousedown event being passed over the WSS connection and used to paint on an HTML canvas element overlaying the customer’s web page.
- A WebRTC-to-SIP signaling gateway (as webrtcHacks has covered here).
- A media gateway, which transforms the SRTP used by WebRTC to the unencrypted RTP used by most enterprise telephony systems, and vice-versa. This element may also carry out H.264 to VP8 video transcoding and audio codec transcoding if required.
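The co-browsing mechanics from the second bullet can be sketched roughly as follows. This is a toy illustration of the customer-side publisher only; the wire format and the `targetPath` helper are invented, and real implementations are proprietary. The serialization function is pure, while the observer wiring only runs in a browser:

```javascript
// Toy sketch of the customer-side publisher in a co-browse session.
// Field names and the wire format are invented for illustration.

// Reduce a DOM mutation to something we can ship over WebSocket.
function serializeMutation(record) {
  return {
    type: record.type,                       // 'childList', 'attributes', ...
    target: record.targetPath,               // assume a CSS-path helper exists
    attribute: record.attributeName || null,
    added: record.addedCount || 0,
    removed: record.removedCount || 0,
  };
}

// Browser-only wiring: observe the page and publish changes to the
// co-browse server over a secure WebSocket (WSS).
if (typeof MutationObserver !== 'undefined' && typeof WebSocket !== 'undefined') {
  const ws = new WebSocket('wss://cobrowse.example.com/session/cust-1001');
  const observer = new MutationObserver((records) => {
    for (const r of records) {
      ws.send(JSON.stringify(serializeMutation({
        type: r.type,
        targetPath: 'body',                  // real code would compute a path
        attributeName: r.attributeName,
        addedCount: r.addedNodes.length,
        removedCount: r.removedNodes.length,
      })));
    }
  });
  observer.observe(document.body, { childList: true, attributes: true, subtree: true });
}

// The pure part can be exercised anywhere:
const msg = serializeMutation({ type: 'childList', targetPath: '#claim-form', addedCount: 2 });
console.log(msg.type, msg.target);
```

The agent side would subscribe to the same WSS session and replay the mutations, with clicks and annotation events flowing back over the same connection, as described above.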
The signaling and media gateways are common components for vendors selling WebRTC add-ons for legacy SIP-based systems, and are functionally equivalent in the network to a Session Border Controller. Indeed, several such products are based on SBCs, or a combination of an SBC for the media and a SIP application server for the signaling gateway. On the other hand, the pseudo-IVR and co-browse servers are rather more specialized elements, designed for contact center applications.
The work of this array of network elements is coordinated by the web portal, using their APIs and supporting SDKs. The sequence diagrams in the next section show how the web portal and the ACD between them orchestrate a WebRTC call from its creation to being handed off as a SIP call to an agent, and how it is correlated with a co-browsing session.
Finally, it should be noted that a reverse HTTP proxy is generally required to protect the web servers in this arrangement, which reside within the inner firewall. The media gateway would normally be placed within the DMZ. The use of multiplexing to allow the media streams of multiple calls to use a single RTP port is a particularly noteworthy feature of WebRTC, which is deserving of appreciation by those whose job it is to manage firewalls.
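That multiplexing shows up directly in the WebRTC client's SDP offer. The fragment below is representative only (ports and payload types invented): the BUNDLE group puts the audio and video streams on a single port, and rtcp-mux folds RTCP onto the same port as RTP, so the firewall sees one flow per call instead of four.

```
a=group:BUNDLE audio video
m=audio 51372 UDP/TLS/RTP/SAVPF 111
a=mid:audio
a=rtcp-mux
m=video 51372 UDP/TLS/RTP/SAVPF 100
a=mid:video
a=rtcp-mux
```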
Call Flows
In the diagrams that follow, purple lines indicate web-based interactions, often based on REST APIs. Some interactions may use WebSockets because of their asynchronous, bidirectional nature, which is particularly useful for call signaling and event handling.
Preparing for a call
Let us start at the point where the customer has already been authenticated by the web portal, and has been perusing their account details. Seeing the big friendly ‘Get Help’ button on their mobile app (remember, this is a mobile-first deployment), they decide they want to talk to a human. Inevitably, an agent is never just sitting around waiting for the call, so there is work to be done to make this happen.
The first step in preparing for the call is for the web portal code to allocate a SIP identity for the caller, in other words, the ‘From’ URI or the caller id. This could be any arbitrary string or number, but it should be unique, since we’re also going to use it to identify the co-browse session. Next, the portal requests the WebRTC signaling gateway to authorize a session for this particular URI, because, well, you don’t want people hacking into your PBX and committing toll fraud using WebRTC. The signaling gateway obliges, and passes back to the web portal a one-time authorization token. Armed with the token, the portal instructs the client app (or browser) to prepare for the WebRTC call. It provides the token, the From URI, the location of a STUN server and information on how to contact the signaling gateway.
While the client is being readied, the portal makes a web services call to the ACD to see when an agent is expected to become available, given the customer’s identity and the nature of their inquiry. (The nature of the inquiry will be determined by what page of the website or app they were on when they pressed the ‘Get Help’ button.) Assuming an agent is not available at that very moment, the portal passes back the estimated wait time to be displayed by the client.
But what about the insipid on-hold music I mentioned earlier? Don’t we need to transfer the customer to a media server to play this? Well, no, we don’t. This is the Web we’re talking about, and we can readily tell the client to play a video from YouTube, or wherever, while they are waiting.
Next, the web portal submits the not-yet-created call to the ACD for queuing, via the pseudo-IVR component. Key pieces of information submitted are the From URI, the customer ID and the queue corresponding to the reason for the call. When the call reaches the head of the queue, the ACD instructs the call to be transferred to the selected agent.
(Side-note: Pseudo-IVR adapters for contact centers are used for a variety of purposes. They may be used to dispatch social media “tweets”, inbound customer service emails and web-chat sessions, as well as WebRTC calls.)
For modern deployments, agent desktop software may be constructed from a web-based framework, which allows third-party plugin components to pull information from the customer database, to connect to a CRM system, and in our case, to connect to the co-browse server. The screen pop to the agent uses the customer URI to connect to the correct session on the co-browse server.
Making the call
Now that the customer and agent are both ready, the web portal instructs the WebRTC client to call the agent’s URI. The actual details of how this is done depend on the vendor-specific WebRTC signaling protocol supported by the gateway; however, on the SIP side of the gateway they are turned into the standard INVITE, with the SDP payload reflecting the media gateway’s IP address and RTP ports.
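The gateway's translation can be sketched like this; the JSON message shape is invented (each vendor's signaling protocol differs), and the INVITE rendering is heavily simplified:

```javascript
// Sketch: a vendor-specific JSON signaling message on the web side
// becomes a standard SIP INVITE on the other side of the gateway.
// Message shape and header rendering are illustrative only.

// What the WebRTC client sends over its WebSocket to the gateway.
const callRequest = {
  verb: 'call',
  token: 'tok-abc123',
  from: 'sip:cust-1001-9f2e@portal.example.com',
  to: 'sip:agent-17@acd.example.com',
  sdp: 'v=0\r\n...',   // the browser-generated offer
};

// A simplified rendering of the INVITE the gateway emits on the SIP side,
// with the SDP rewritten to point at the media gateway's address.
function toInvite(req, mediaGatewayIp) {
  return [
    `INVITE ${req.to} SIP/2.0`,
    `From: <${req.from}>`,
    `To: <${req.to}>`,
    'Content-Type: application/sdp',
    '',
    req.sdp.replace('...', `c=IN IP4 ${mediaGatewayIp}`),
  ].join('\r\n');
}

const invite = toInvite(callRequest, '203.0.113.10');
console.log(invite.split('\r\n')[0]);
```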
The fact that this is a video call is transparent to the ACD. The pool of agents with video capability can be put in their own skill group for the purposes of allocating them to customers using WebRTC. The agents could be using video on suitably equipped phone handsets, or they could themselves be using WebRTC clients. Indeed, some contact center vendors with whom I have spoken point to the advantages of delivering the entire agent experience within a browser: delivering updates to a SIP soft client or thick-client agent desktop software then becomes a thing of the past.
After the video session has been established, the customer may assent to the sharing of their mobile app screen or web-browsing session. The co-browse server acts as the rendezvous point, with the customer’s unique URI acting as the session identifier.
Concluding thoughts: It’s all about mobile, stupid!
The fact that WebRTC is not ubiquitous, that it is not currently supported in major browsers such as Safari and Internet Explorer, might be thought an insurmountable barrier to deploying it in a contact center. But this is not the case. The very same infrastructure that works for web browsers also works for mobile apps, which in many cases are simply mobile UI elements placed on top of a web application, making the same HTTP calls on a web server.
All that is required is a WebRTC-based SDK that works in Android or iOS. Happily for us, Google has made its WebRTC code readily available through the Chromium project. Several vendors have made that code the basis of their mobile SDKs, wrapping them with Java and Objective-C language bindings equivalent to the JavaScript APIs found in browsers.
For contact center executives, a mobile-first approach offers the following advantages:
- You don’t want your customers messing around trying to install WebRTC browser plugins for Safari and IE. If they’re going to download anything, it may as well be your mobile app.
- Mobile devices are near-ubiquitous. Both Pew and Nielsen report their popularity amongst older demographics in particular, where regular PCs might not be used.
- Microphones and cameras on mobile devices are near-universal and of excellent quality, and echo cancellation works well. That old PC with a flaky webcam? Perhaps not so much.
- If your customer is having a real-world problem, then the back-facing camera on a phone or tablet is a great way of showing it. The auto insurance industry comes readily to mind.
- Although the Great Video Codec Compromise now promises H.264 support in browsers, mobile SDKs have been able to take advantage of those devices’ hardware support for H.264 video encoding for some time. When your contact center agents have sleek enterprise-class, video-capable phones that don’t support VP8, you don’t want to have to buy a pile of servers simply to do video transcoding.
In the call center industry, Amazon and American Express have shown the way in supporting video in their tablet apps, and both these services use WebRTC under the hood. Speaking at the 2014 Cisco Live! event in San Francisco, Amex executive Todd Walthall related how users of the Amex iPad app who used the video feature had greater levels of customer satisfaction, through a more personal experience. This should not surprise us, as it’s much easier to empathize with a customer service representative if they’re not just a disembodied voice.
For companies deploying WebRTC, it’s an incremental approach that doesn’t require significant architectural change or the replacement of existing systems. Early adopters are seeing shorter calls, as context is preserved and co-browsing allows problems to be resolved more quickly. One day we will look back at IVR Hell, waiting on endless hold with only a lo-fi rendition of Mantovani for company, trying in vain to find our account number and PIN, as if it were a childhood nightmare.
{“author”: “Robert Welbourn“}
Dave says
Excellent job of laying the foundation and building on it. I found your article extremely easy to follow. Can a contact center using provider "A" for their IP Toll Free use provider "B" to handle RTC incoming traffic?
Robert Welbourn says
Dave: There’s nothing stopping a contact center from having separate SIP trunks for IP Toll Free, and for WebRTC traffic that has been converted into regular SIP/RTP via a gateway that’s delivered as a Platform-as-a-Service (PaaS).
Let’s assume that our PaaS provider is hosting the signaling and media gateways. If you look at the second diagram in the post above, you’ll see that the signaling gateway interacts with the contact center’s web portal. What’s required here is for the PaaS to provide a secure API so that the web portal can allocate an identity (that is, a SIP URI) for the WebRTC caller, and request a token for the session.
When the RTC session is set up, the PaaS provider is going to have an SBC on their end of the SIP trunk, and the contact center will have their own. If the IP Toll Free provider also supplies an SBC as managed CPE, then most likely you will have to supply your own SBC for the WebRTC sessions.
Leon Thibeaut says
Nice article. Everyone is on the WebRTC bandwagon, it would seem. But I have been unable to find a vendor or solution which allows for basic call switching functionality of a WebRTC video call (i.e. transfer to another agent). Is there a technical component I am missing which would explain this limitation?
Thanks