
Video chat software turns users into live avatars

Nov 16, 2005 — by LinuxDevices Staff — from the LinuxDevices Archive

Oki Electric Industry Co. Ltd. has developed technology that can add animated faces to instant messaging, networked gaming, and other real-time communications used on mobile phones and PCs. Oki's “FaceCommunicator” software leverages technology similar to the company's face recognition software that identifies the owners of handhelds. The technology supports Linux-based mobile phones.

FaceCommunicator is touted as useful for maintaining privacy and security during first-time “face-to-face” communications over video phones or mobile phones, or in IM and chat-room conversations on the Internet. In addition, the facial animations let users express emotions that might be hard to put into words, Oki says.

The technology can take advantage of four sources of user input to generate and control its transmitted animated faces — video images from PC or mobile phone cameras; voice; text; and mouse/keyboard commands.

Users can select both the animated face and a background image to suit the need of the moment. Additionally, some of the animated faces can move their eyebrows and mouths as though talking, which adds a virtual-reality dimension to communications, according to Oki.
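To make the relationship between the user-selected elements and the four input sources concrete, the following minimal sketch shows how a hypothetical client built on top of FaceCommunicator might describe a session. All names here (AvatarSession, InputSource, and so on) are illustrative assumptions, not part of any API Oki has published.

    #include <string>

    // Hypothetical configuration for one FaceCommunicator-style session.
    // Type and field names are assumptions for illustration only.
    enum class InputSource {
        Camera,        // video images from a PC or mobile phone camera
        Voice,         // live voice, lip-synced to the animated face
        Text,          // typed text fed through text-to-speech
        KeyboardMouse  // direct expression/motion commands
    };

    struct AvatarSession {
        std::string faceModel;    // user-selected animated face
        std::string background;   // user-selected background image
        InputSource source;       // which of the four input sources drives the face
    };

    int main() {
        // A user picks a face and background to suit the moment,
        // and drives the avatar from the device's camera.
        AvatarSession session{"cartoon_face_01", "office_background", InputSource::Camera};
        return session.source == InputSource::Camera ? 0 : 1;
    }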

Modes of operation

The company describes the following four FaceCommunicator operating modes:

  • Image recognition — This most interesting mode is based on what Oki calls “expression animation technology,” which the company says is similar to the technology used to make animated movies. This mode uses Oki's image recognition technology to detect movement of the user's eyes, eyebrows, and mouth, and synchronizes the transmitted face animation to the user's actual facial movements. The result is a sort of animated proxy for the user's live face, as illustrated in the following animated graphic:


    Example of image recognition-controlled face animation
    (Source: Oki Electric Industry Co. Ltd.)

  • Voice synchronization — In another interesting mode of operation, FaceCommunicator simply synchronizes the transmitted face to the user's live voice. The transmitted animated face appears to be speaking, but without requiring a video camera for image acquisition and recognition.
  • Text-to-speech — FaceCommunicator can also use a text-to-speech function to generate both speech and an accompanying synchronized animated face that appears to be doing the talking. The user simply types a message, and FaceCommunicator does the rest.
  • Keyboard/mouse control — Finally, users can directly control the expression and motion of a transmitted animated face by means of keyboard and mouse commands, without depending on video recognition, voice, or text-to-speech technologies.

Note, however, that the voice synchronization and text-to-speech modes described above are not currently supported in the Embedded Edition (mobile version) of FaceCommunicator.
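The four modes above amount to four different ways of producing the same stream of facial-expression parameters. The sketch below uses invented names (Mode, makeExpressionFrame, and placeholder per-mode functions) rather than anything from Oki's software; it simply shows, under those assumptions, how an application might dispatch among the modes and how an Embedded Edition build could exclude the unsupported voice-synchronization and text-to-speech paths.

    #include <stdexcept>

    // Hypothetical expression parameters handed to the animation stage each frame.
    struct ExpressionFrame {
        float mouthOpen;   // 0.0 (closed) .. 1.0 (fully open)
        float browRaise;   // -1.0 (furrowed) .. 1.0 (raised)
        float eyeOpen;     // 0.0 (closed) .. 1.0 (open)
    };

    enum class Mode { ImageRecognition, VoiceSync, TextToSpeech, KeyboardMouse };

    // Each function stands in for one of the four documented modes;
    // the bodies are placeholders, not Oki's algorithms.
    ExpressionFrame fromCamera()   { return {0.2f, 0.0f, 1.0f}; }
    ExpressionFrame fromVoice()    { return {0.6f, 0.1f, 1.0f}; }
    ExpressionFrame fromTts()      { return {0.5f, 0.0f, 1.0f}; }
    ExpressionFrame fromKeyboard() { return {0.0f, 1.0f, 1.0f}; }

    ExpressionFrame makeExpressionFrame(Mode mode) {
    #ifdef EMBEDDED_EDITION
        // Per Oki, the mobile (Embedded Edition) build does not currently
        // support voice synchronization or text-to-speech.
        if (mode == Mode::VoiceSync || mode == Mode::TextToSpeech)
            throw std::runtime_error("mode not supported in Embedded Edition");
    #endif
        switch (mode) {
            case Mode::ImageRecognition: return fromCamera();
            case Mode::VoiceSync:        return fromVoice();
            case Mode::TextToSpeech:     return fromTts();
            case Mode::KeyboardMouse:    return fromKeyboard();
        }
        return {};
    }

    int main() {
        ExpressionFrame f = makeExpressionFrame(Mode::ImageRecognition);
        return f.mouthOpen >= 0.0f ? 0 : 1;
    }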

What makes it tick?

According to Oki, FaceCommunicator consists of two software modules — Expression Generator and Animation Generator, as illustrated in the following architecture diagram.


FaceCommunicator architecture
(Source: Oki Electric Industry Co. Ltd.)

Expression Generator determines how the features of the face (eyes, eyebrows, lips, etc.) should be changed, and sends appropriate parameters to Animation Generator, Oki explains. Animation Generator, in turn, generates the animated face by moving and otherwise altering those features according to the parameters it receives from Expression Generator, resulting in changes in expression.
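The following sketch illustrates that two-module split in code. It is an assumption-laden mock-up, not Oki's implementation: a hypothetical ExpressionGenerator turns whatever input is active into facial parameters, and an AnimationGenerator consumes those parameters to move the face's features.

    #include <iostream>

    // Parameters describing how the face's features should change
    // (a stand-in for whatever Oki's Expression Generator actually emits).
    struct FaceParameters {
        float mouthOpen;
        float browRaise;
        float eyeOpen;
    };

    // Hypothetical first module: decides how eyes, eyebrows, lips, etc.
    // should change, based on the active input source.
    class ExpressionGenerator {
    public:
        FaceParameters nextFrame() {
            // Placeholder: a real implementation would analyze camera images,
            // voice, text, or keyboard/mouse commands here.
            return {0.4f, 0.1f, 1.0f};
        }
    };

    // Hypothetical second module: applies the parameters to the selected
    // animated face, producing the change in expression the other party sees.
    class AnimationGenerator {
    public:
        void render(const FaceParameters& p) {
            std::cout << "mouth=" << p.mouthOpen
                      << " brow=" << p.browRaise
                      << " eyes=" << p.eyeOpen << '\n';
        }
    };

    int main() {
        ExpressionGenerator expression;
        AnimationGenerator animation;
        // One iteration of the pipeline: parameters flow from the
        // Expression Generator to the Animation Generator.
        animation.render(expression.nextFrame());
        return 0;
    }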

System requirements

Oki has developed two versions of FaceCommunicator, one for standard PCs and one for handhelds. Their respective key requirements are listed as:

  • FaceCommunicator Broadband Edition — requires an Intel Celeron processor clocked at 1GHz or faster (Pentium 4 at 1.8GHz or faster preferred) running Windows 2000 Professional (SP4 or later) or Windows XP Home or Pro (SP1 or later). At least 256MB of DRAM is needed, and a graphics subsystem with at least the performance of nVIDIA's GeForce FX5200 is recommended, among other requirements.
  • FaceCommunicator Embedded Edition — this mobile version of FaceCommunicator requires an ARM9 processor clocked at 100MHz or higher (200MHz preferred), running Linux, Symbian, or uITRON, among other requirements.

Oki presumably offers a software development kit (SDK) that device makers, service providers, and software vendors could use to integrate FaceCommunicator into their products. However, the company had not responded by publication time to requests for clarification about how the technology is integrated into consumer products, or about the associated costs.

Availability

FaceCommunicator was developed by the Sensing Solutions Development Department of the Business Incubation Division of Oki's Systems Network Business Group. The company debuted FaceCommunicator at the Broadband World Forum Asia 2005 in May of this year. In June, a website devoted to promoting the technology was launched.

Further details, compatibility requirements, and contact information are available on Oki's FaceCommunicator website.


 


