An all-IP studio production environment opens up the opportunity to create more powerful production tools for the live workflow.
Seamlessly integrating software processes into the live environment provides production directors with access to a wider range of applications. These may include inherently non real-time processes such as time reversal, creative features such as content-related graphics, or complex effects generation such as content speed-up/slow-down.
In this paper, we use the example of a synthetic slow motion application to show how an inherently non real-time process can be integrated into a live IP production environment.
This work was carried out within the context of the AMWA Networked Media Incubator (NMI) project (www.amwa.tv/nmi/). The InSync synthetic slow motion software was demonstrated at a workshop organised by the NMI's lead organisation, the BBC, where we proved interoperability with a reference system and with other vendors' studio equipment.
The move to all-IP production has generated a lot of interest from broadcasters as a means of both saving costs and enabling innovation in production methods.
Recent experimentation, such as that reported by the BBC (1), has shown that removing the constraints of SDI content transfer can lead to new production workflows which allow support for software-based applications.
The use of software applications, rather than dedicated proprietary hardware, introduces greater flexibility in the features and functionality which can be made available to production professionals, and allows for more rapid upgrade and development of improvements and enhancements to such applications.
During the transition from SDI to IP-based facilities, content producers will operate in hybrid system architectures, where both native IP and legacy SDI equipment are in use.
The hybrid environment is facilitated by provision of edge devices such as SDI to IP and IP to SDI interfaces, as shown in Figure 1. Such a system may incorporate software and hardware processing equally: as long as the required functionality is accessible and controllable, it does not matter how it is implemented.
In the example illustrated in Figure 1, we may imagine a hybrid production environment where hardware sources, such as cameras and file servers, stream real-time IP-encapsulated content via multicast to a media network, where other devices, such as a vision mixer, standards converter or signal monitor, may be present in hardware or software.
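Such multicast streams typically carry media as RTP-encapsulated packets (as in the SMPTE 2022/ST 2110 family of transport standards). As an illustrative sketch only, not taken from the system described in this paper, the following Python snippet parses the 12-byte fixed RTP header defined in RFC 3550, from which a receiver recovers the timing and sequencing information a live workflow depends on:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header defined in RFC 3550."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP fixed header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHLL", packet[:12])
    return {
        "version": b0 >> 6,            # always 2 for RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),     # often flags the final packet of a video frame
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,        # detects loss/reordering on the network
        "timestamp": ts,               # media timing, in the payload's clock rate
        "ssrc": ssrc,                  # identifies the stream's source
    }
```

A sender's sequence numbers and timestamps, read this way, are what allow downstream software processes to reassemble frames and align streams in time.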
However, integration of inherently non real-time processes into the live IP production environment does not immediately appear to be straightforward.
Visually interesting processes such as time reversal, creative features such as content-related graphics, or complex effects generation such as time speed-up/slow-down, as currently used in live production, rely on a storage/process/playback paradigm in order to be integrated into the programme as it is produced. Providing such functionality in software requires careful architectural design if it is to be both efficient and user-friendly.
Fortunately, the AMWA Networked Media Incubator (NMI) project (www.amwa.tv/nmi/) developed an open specification (www.nmos.tv/), building on the Joint Task Force on Networked Media Reference Architecture, which enabled InSync to integrate a fundamentally non real-time application (synthetic slow motion) into a live IP production environment.
The Networked Media Open Specifications (NMOS) are a family of specifications covering the various aspects of Discovery and Registration (currently AMWA Proposed Specification IS-04) and In-stream Identity and Timing, which were implemented and proven within the AMWA NMI.
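Under IS-04, a Node announces itself by POSTing its resources to a registry's Registration API and then keeps that registration alive with periodic health (heartbeat) requests. The sketch below, a simplified illustration rather than a complete client, builds the request paths and payloads involved; the registry base URL is a hypothetical example (in practice the registry is discovered via DNS-SD):

```python
import uuid

# Hypothetical registry location -- in a real deployment this is discovered
# via DNS-SD rather than hard-coded.
REGISTRY_BASE = "http://registry.example.local/x-nmos/registration/v1.0"

def registration_request(resource_type: str, data: dict) -> tuple:
    """Build the (url, json_body) for an IS-04 resource registration POST."""
    return (REGISTRY_BASE + "/resource",
            {"type": resource_type, "data": data})

def heartbeat_url(node_id: str) -> str:
    """Nodes POST here periodically so the registry knows they remain alive."""
    return f"{REGISTRY_BASE}/health/nodes/{node_id}"

node_id = str(uuid.uuid4())
url, body = registration_request("node",
                                 {"id": node_id, "label": "slow-motion-processor"})
```

Once registered, a software process such as a synthetic slow motion application is discoverable by any controller querying the registry, on the same footing as hardware devices.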
More information about the NMI and NMOS may be found in (2) and (4); however, the aspects of NMOS relevant to this paper are the logical data model and the addition of relationships and time-based information to content and broadcast equipment.
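In the IS-04 logical data model these relationships are expressed as UUID references between resources: a Flow references the Source it represents, and a Sender references the Flow it transmits. The following simplified sketch (field names follow the IS-04 schemas, but the resources are heavily abridged) shows how a controller can walk those references from a Sender back to the originating Source:

```python
import uuid

def make_id() -> str:
    return str(uuid.uuid4())

# Abridged IS-04-style resources: each carries a UUID and references its
# parent resource by id, so relationships can be traced end to end.
source = {"id": make_id(), "label": "slow-motion output"}
flow = {"id": make_id(), "source_id": source["id"],
        "format": "urn:x-nmos:format:video"}
sender = {"id": make_id(), "flow_id": flow["id"],
          "transport": "urn:x-nmos:transport:rtp.mcast"}

flows = {flow["id"]: flow}
sources = {source["id"]: source}

def source_of(sender: dict, flows: dict, sources: dict) -> dict:
    """Walk sender -> flow -> source via the id references."""
    return sources[flows[sender["flow_id"]]["source_id"]]
```

It is this chain of identity, combined with in-stream timing, that lets a non real-time process advertise its output as just another Source in the live environment.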