Undersea Window – High Definition Video Online

The goal of this project is to provide live, uncompressed high definition video online from the seafloor, and to facilitate interactive remote control of the camera and of the transmission using web services software. The expected result is that scientists, educators and the public will be able to control their own view of the undersea environment from wherever they are.

Innovation - Undersea Window

The Project will transmit live full broadcast standard high definition video from a camera on the undersea VENUS network, 100m below the surface of the Saanich Inlet on Vancouver Island, to scientists, educators and the public throughout Canada and around the world via CA*net 4 and inter-connected broadband networks. Deployment of the camera involves use of an undersea remotely operated vehicle (ROV) which must also attach the fibre-optic communication and power supply cable that interfaces with the VENUS network node. The Project will serve as a test bed for subsequent high definition video camera deployment on the NEPTUNE network.

VENUS is an ambitious project to conduct coastal oceanography in an innovative and informative way. It is a network of instruments in the ocean to observe the marine waters around southern Vancouver Island. For the first time, researchers will not have to wait for data from periodically recovered instruments - it will be delivered in real time. The subsequent NEPTUNE project will lay a 3,000 km network of powered fibre-optic cable, connected to a number of seafloor “laboratories,” or nodes, over the Juan de Fuca tectonic plate. The instruments will be interactive — scientists will instruct them to respond immediately to events such as storms, plankton blooms, fish migrations, earthquakes, tsunamis, and underwater volcanic eruptions. The installation phases of the two projects have been funded at $10.3M and $64.2M, respectively.

The "Undersea Window" Project will use Web Services to interactively control the camera and video transmission from across the continent. The challenge is to minimize latency to enable camera control input, such as panning, in reaction to camera responses observed in the transmitted video stream. The planned service-oriented architecture (SOA) consists of three major service components: 1) the undersea camera control service, including all aspects of the pan/tilt unit as well as those of the camera itself; 2) the video transmission service, which coordinates transmission of the high-definition video signal to requesting consumers; and 3) the monitoring service for other instruments and sensors on the seabed, not only to provide scientists with relevant non-video data, but also to initiate triggered requests of the camera in response to monitored sensor values, for example, due to a possible geological event or pressure change. It is expected that the Web Services software developed by the Project to control the high definition camera could later be adapted to control other instruments on the seafloor.
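As a concrete illustration, the three service components could be exposed through interfaces along the following lines. This is a minimal Python sketch; all class names and method signatures are illustrative assumptions, not the project's actual API.

```python
from abc import ABC, abstractmethod

class CameraControlService(ABC):
    """Service 1: pan/tilt unit and camera settings (gain, colour balance, iris)."""
    @abstractmethod
    def pan(self, degrees: float) -> None: ...
    @abstractmethod
    def tilt(self, degrees: float) -> None: ...
    @abstractmethod
    def set_parameter(self, name: str, value: float) -> None: ...

class VideoTransmissionService(ABC):
    """Service 2: coordinates HD video delivery to requesting consumers."""
    @abstractmethod
    def start_stream(self, consumer: str) -> str: ...   # returns a session id
    @abstractmethod
    def stop_stream(self, session_id: str) -> None: ...

class SensorMonitoringService(ABC):
    """Service 3: seabed instrument data, with triggered camera requests."""
    @abstractmethod
    def read(self, sensor: str) -> float: ...
    @abstractmethod
    def on_threshold(self, sensor: str, limit: float, action) -> None: ...
```

Keeping the three components behind narrow interfaces like these is what would later allow the same control software to be adapted to other seafloor instruments.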

Functional Diagram (.pdf file)

The undersea portion of the transmission between the camera and the shore station will employ a dedicated fibre and modified bi-directional laser transmitter/receivers. From the shore station, video transmission will use the CA*net 4 network and existing video over IP software developed by McGill. Due to the high bandwidth requirement for high definition video transmission, lightpaths will be used. This will enable the testing and comparison of UCLP software developed by both the Communications Research Centre/University of Ottawa and the University of Quebec at Montreal.

The results of the project will be formally evaluated by scientific, educational and other users so that its successes can be documented and made available to others.

Responsibilities and Collaborations

The Lead Contractor is McGill University. The partner organizations are the University of Victoria (VENUS/NEPTUNE Projects and LACIR) and FlexMet Technologies Inc., a spin-off company partly owned by the University of Victoria. All of the participants will be funded by the project. Both universities have many years of experience as major international research organizations and all of the expertise required to complete the proposed research successfully. The senior researchers have international reputations and many years of experience in their fields.

McGill will be responsible for all of the Web Services and video/data transmission software development work. It will also be responsible for selecting and modifying the camera, camera control and fibre-optic video/data transmission hardware. This will involve collaboration with other organizations which may subsequently make contributions to the project. Discussions have already been initiated with the Monterey Bay Aquarium Research Institute (MBARI). McGill also has established working relationships with Panasonic Inc. and Ikegami Inc., the two manufacturers most likely to supply the camera and control hardware. Work on the application specific fibre-optic transmission hardware will most likely be undertaken with Evertz Inc., a Canadian company that already has considerable expertise and products in this area.

The University of Victoria will be responsible for the collaborative development of the camera package: the underwater camera case, pan/tilt head, support mechanism and the optional bio-shutters. It will also be responsible for integrating the camera package with the undersea remotely operated vehicle (ROV) and with the VENUS Saanich Inlet underwater observatory node. The design, development and testing of the camera case, pan/tilt head, support mechanism and the optional bio-shutters will be done in collaboration with commercial entities that have experience in this area, preferably Canadian ones. Integration of the camera package with the underwater vehicle and the deploying and testing of the integrated camera and vehicle system will be done in collaboration with FlexMet Technologies Inc., owner of the vehicle. The vehicle's tether system will be modified to be compatible with the VENUS node and the vehicle's frame and body will be modified to accommodate the transport of the HD camera package in the vicinity of the node.

FlexMet Technologies will provide a suitable underwater vehicle and associated support infrastructure including technical and operational support.

There will be very close collaboration between McGill and the University of Victoria on the design of the camera case and pan/tilt head. Each group will be responsible for the pre-deployment testing of their components and the groups will collaborate on deployment and post-deployment testing and operation. Satlantic Inc. of Halifax will provide design and consulting services for the tether management system.

A small amount of funding has been allocated to two sub-contractors for supply of UCLP software, CRC/UOttawa and UQAM. They will work in collaboration with the software programmers at McGill. McGill also has close working relationships with the two ORANs involved, BCnet and RISQ. This will ensure that any network "last mile" problems that may be encountered can be dealt with quickly and efficiently. The contact at BCnet has been Michael Hrybyk and the contact at RISQ has been Philippe Groux.

McGill Participation

The McGill team expects to use the research results as part of its ongoing research on very high quality videoconferencing, low latency video, audio and data transmission over IP networks, and creating the feeling of "being there" in remote locations. The McGill team involves collaboration of Instructional Multimedia Services, which is responsible for the development of the video components; the Centre for Intelligent Machines of the Faculty of Engineering, which is responsible for development of the transmission software; and the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) of the Faculty of Music, which is the umbrella research centre.

University of Victoria Participation

The "Undersea Window" is important to both the VENUS and NEPTUNE projects not only for the obvious opportunities for undersea scientific observation, but also for public outreach to inform, educate and increase support for this type of research. The University of Victoria team involves collaboration of the NEPTUNE project, the VENUS project and the Laboratory for Automation, Communication and Information Systems Research.

FlexMet Technologies Participation

FlexMet Technologies Incorporated is a private company incorporated in the Province of British Columbia, owned 50% by Colin Bradley and 50% by the Innovation and Development Corporation, which is in turn owned 100% by the University of Victoria. Gross annual income for 2004 was $116,080. It does not have any permanent employees and hires staff as required for specific projects. The company's participation in this project will expand its experience in this rapidly developing area of underwater vehicle technology. This will help it to identify current needs and key technologies and better position itself to develop new products to address those needs.

Control of and Access to Project Results

Each participant will control the results of its research. McGill, as lead contractor, will ensure that research results involving inter-group collaboration are disseminated as widely as possible. All of the participants expect that the results of the research will be published in academic journals and disseminated at international conferences, at CANARIE workshops and on the project web site. The results of previous CANARIE-funded research at McGill were widely disseminated in this manner and also received extensive coverage in the popular press. See: http://www.canarie.mcgill.ca/press.htm. The University of Victoria is the owner of the VENUS and NEPTUNE Canada observatories and is responsible for the distribution and dissemination of ocean data and imagery.

Technology Architecture and Implementation Plan

Our initial priority is to specify appropriate web services to be developed for camera control, video transmission session initialization, and sensor monitoring activities. This involves selecting appropriate development language(s), specifying the interfaces between components, and determining core functionality necessary for the first phase of implementation. We anticipate hiring our web services programmer by October 1st and proceeding from there to map out the details of the SOA software architecture to be used throughout the project. The high-level design should be completed during the second month of the project, with implementation to follow immediately thereafter.

In addition, we will formulate a test plan for evaluating the early prototypes of the user interface that accesses these web services, including the choice of test user populations and core interface requirements. This should be completed by the end of the first month of the project, with mock-ups designed during the second month. Further details of the work plan are included below.

Camera Control

The first major service component is responsible for control of the undersea camera, including all aspects of the pan/tilt unit as well as those of the camera itself (e.g. gain, colour balance, iris setting, etc.). This component will receive requests from the two other components and respond according to a priority table and the current schedule. Because the camera is a shared resource, components may specify maximum wait (delay until their request is serviced) and minimum hold (interval during which no conflicting requests should be executed) times for each request in order to provide maximum flexibility of operations and permit advanced scheduling for users who need exclusive control for a block of time. If these parameters are not specified, the request will be executed immediately, providing it does not conflict with any other current operation, and otherwise it will be refused.
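The maximum-wait/minimum-hold scheduling rule described above can be sketched as follows. This is illustrative Python, not the project's implementation; class and field names are assumptions made for the sketch, and times are in seconds.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CameraRequest:
    priority: int                      # from the priority table; lower = more urgent
    max_wait: Optional[float] = None   # how long the requester will wait, if at all
    min_hold: Optional[float] = None   # exclusive-hold interval once serviced

class CameraScheduler:
    """Decides whether a camera request runs now, is queued, or is refused."""
    def __init__(self) -> None:
        self.queue: List[CameraRequest] = []
        self.busy_until = 0.0          # time at which the current hold expires

    def submit(self, req: CameraRequest, now: float) -> str:
        if now >= self.busy_until:
            # No conflicting operation: execute immediately, honouring min_hold.
            self.busy_until = now + (req.min_hold or 0.0)
            return "executed"
        if req.max_wait is None:
            return "refused"           # conflicts, and caller gave no wait tolerance
        if now + req.max_wait < self.busy_until:
            return "refused"           # cannot be serviced before the caller's deadline
        self.queue.append(req)         # serviced later, in priority order
        return "queued"
```

For example, a request with a ten-second min_hold blocks the camera for that interval; a later request with no max_wait is refused during the hold, while one whose max_wait extends past the hold is queued.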

The user interface for this component is a key factor in usability of the overall system. It must combine aspects of video display (likely at low-fidelity) in order to provide a visual context for the operations available, offer state information and feedback concerning operations in progress or queued, as well as information regarding other users in the event of distributed, coordinated research needs. As such, we will begin with mock-ups and low-fidelity prototyping to gain early feedback from candidate users. As it is unlikely that these users will have experience with existing tools (since underwater cameras with remote control capability are somewhat of a novelty) against which they can compare, it may be advantageous to consider parallel testing with a user community that is more familiar with teleoperation tasks.

Once mock-up testing is completed, we will begin deploying some of the camera-control functionality immediately on conventional pan-tilt-zoom cameras (e.g. our Sony EVI-series cameras), installed in our laboratory facility, for which the interface could be made publicly available for test purposes. To maximize the pool of test subjects, initial testing of this component might provide a low-bandwidth video stream to clients through QuickTime or RealPlayer compression. Later, this would evolve to include session control for the Ultra-Videoconferencing transport, discussed below. Ideally, one of our cameras could be deployed during certain periods in a virtual underwater environment (e.g. the Shared Reality Environment, with projected underwater scenery) so that oceanography scientists could comment meaningfully on their experience with the camera control interface. This will permit us to gather additional user feedback at an early stage of development and make revisions as appropriate.

Video Transmission

The second component provides the video transmission service. It coordinates transmission of the high-definition video signal to requesting consumers. Since the underlying software is capable of effecting various image processing operations, including compression, and can transmit either temporally or spatially sub-sampled frames as well as select regions of interest as per the subscriber's requirements, this component represents a substantial interface to the previously developed Ultra-Videoconferencing software.

Architecturally, we anticipate that the web services functionality for this component will replicate much of the current front-end to our transmission software (the 'uv' launch script, written in Perl). This permits the server (service provider) to advertise its detailed capabilities to requesting clients (consumers) so that the user need only generate a high-level request to initiate audiovisual streaming. Clearly, in migrating to web services, our schema for this component would include a description of the services offered by the camera, information on how to request initiation or termination of a stream and a specification of the data format carrying video and control information.
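For instance, the capability advertisement and high-level request resolution might take a shape like the following. This is a hedged Python sketch rather than the project's Perl front-end, and the capability names and formats are invented for illustration.

```python
# Hypothetical capability advertisement published by the video transmission service.
CAPABILITIES = {
    "formats": ["HD-SDI 1080i", "540p sub-sampled", "270p sub-sampled"],
    "compression": ["none", "motion-JPEG"],
    "max_consumers": 4,
}

def negotiate(request: dict, caps: dict = CAPABILITIES) -> dict:
    """Resolve a high-level client request against the advertised capabilities."""
    fmt = request.get("format")
    if fmt not in caps["formats"]:
        fmt = caps["formats"][0]       # fall back to the full-quality default
    comp = request.get("compression", "none")
    if comp not in caps["compression"]:
        comp = "none"
    return {"format": fmt, "compression": comp}
```

The point of the sketch is that the consumer issues only a high-level request; the service fills in or corrects the details from its own advertisement, as the 'uv' front-end does today.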

It is anticipated that user control over such parameters will be merged with the previously discussed interface for camera control, in order to offer users a unified interface for all control aspects of the Undersea Window. However, beyond the operation of connection setup, this likely represents a secondary level of functionality that should be left for a second phase of user interface design and development.

With respect to the overall SOA architecture, requesting components may have differing requirements with respect to delivery guarantees (e.g. best effort, minimum latency, or fully reliable) that must be taken into account. As the transmission is likely to involve serial unicasting to multiple requesters and may evolve to provide multicast capability, this component will be situated on a host (likely on-shore) receiving the full quality original video signal and possessing sufficient processing and bandwidth capability to service several distinct requests simultaneously. Video stream capability should ideally be left for discovery during operation rather than pre-specified, since this might change depending on camera model and other computational and bandwidth demands on the video component. These aspects of video transmission will be addressed in later stages of the project as the underlying transmission software evolves in parallel with the web services implementation.

Instruments and Sensors

A final component monitors other instruments and sensors on the seabed, not only to provide scientists with relevant non-video data, but also to initiate triggered requests of the camera in response to monitored sensor values, for example, due to a possible geological event or pressure change. There is also the possibility of augmenting the video stream with relevant data obtained from the sensor component. An important architectural question is whether such data integration should be performed directly at the video transmission component, which would reduce transmission requirements and concern for clock skew or variable network packet interleaving, or be left to the consumer components located at the user endpoints. In a similar manner, future development might include such superposition on the video of feedback from an acoustic positioning system on a robotic vehicle that would allow position information to be conveyed in addition to live video while the camera is being relocated.
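A triggered camera request from the sensor component could be wired up roughly as follows. This is illustrative Python; the sensor name, threshold value and request format are hypothetical.

```python
def make_trigger(sensor_name, threshold, camera_request):
    """Build a callback that issues a camera request when a reading crosses threshold."""
    def on_reading(value):
        if value >= threshold:
            return camera_request(sensor_name, value)
        return None                    # reading is normal: no camera action
    return on_reading

# A hypothetical pressure trigger that logs the event and requests the camera.
requests_log = []
pressure_trigger = make_trigger(
    "pressure", 120.0,
    lambda sensor, value: requests_log.append((sensor, value))
                          or f"point camera at {sensor} event ({value})")
```

In the deployed system, the returned request would go to the camera control service's scheduler rather than a log, competing with interactive users according to the priority table.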

Security and Bandwidth Verification

Since the software will operate over CA*net 4, each component will require some form of security mechanism to ensure that only permitted consumers may access the resources. Obviously, stricter controls will be required over the camera component than over the video or sensor streams, as the latter can be shared with non-critical users. Bandwidth availability for requesting consumers might be verified by a sub-component of the video transmission component in order to provide improved guarantees on reliable delivery or to negotiate a reduced-bandwidth stream, as appropriate. Because these aspects of the architecture are hidden from the user, they can be designed and developed without formal HCI methods and, as such, should proceed in conjunction with development of the other components. Security mechanisms will be engineered into the system from the outset, whereas the less critical aspects of bandwidth verification can be added at a later stage in the project.
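The tiered access control and bandwidth negotiation described above might look roughly like this. It is a minimal sketch under assumed permission tiers and rates; none of the user names, resource names or figures come from the project.

```python
# Hypothetical permission tiers: camera control is restricted, streams are broader.
PERMISSIONS = {
    "researcher": {"camera", "video", "sensors"},   # trusted scientific user
    "guest": {"video"},                             # public viewer: video only
}

def authorize(user: str, resource: str) -> bool:
    """Check whether a consumer may access a given resource at all."""
    return resource in PERMISSIONS.get(user, set())

def admit(user: str, resource: str, available_mbps: float, required_mbps: float) -> str:
    """Admit a consumer if permitted; negotiate a reduced rate if bandwidth is short."""
    if not authorize(user, resource):
        return "denied"
    if available_mbps >= required_mbps:
        return "full-rate"
    return "reduced-rate"              # offer a compressed or sub-sampled stream
```

This keeps the strict check (camera access) separate from the soft one (bandwidth), matching the plan to build security in from the outset while deferring bandwidth verification.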