|Interactions and Communication Design Lab.||Social robotics, human-robot interaction, cognitive science in communication, learning science|
We are conducting research on the underlying mechanisms of our everyday communication from multidisciplinary perspectives, including cognitive science, ecological psychology, socio-cultural approaches, and cognitive robotics.
Social robotics and human-dependent robots
One of our research projects develops sociable artificial creatures, such as Sociable Trash Box, Talking-Ally, Muu, Makono-te, and Pelat, which serve as vehicles for studying social robotics and for investigating the social organization and developmental processes of relationships between humans and social artifacts.
Cognitive science in communication
Our research focuses not only on building sociable artificial creatures as applications to novel human interfaces and social learning environments, but also on social intelligence and communications emerging from the interaction between sociable creatures and social others.
Learning science and learning environment design
Our research findings equip sociable artifacts with the social skills they need to act as social entities in human-inhabited environments; such artifacts can be used to build novel collaborative learning environments and communication-skills training schemes for children.
|Visual Agent Laboratory||Style informatics of images, visible light communications with image sensors, smart lighting and illumination, computer graphics and image-related applications, interaction technology via visual media|
We are developing novel visual applications for exploring new media environments powered by smart visual agents, based on images or graphics technologies such as: style-based retrieval and conversion of illustrations and fonts, smart digital lighting, and visible light communications with image sensors and visual codes.
Style-based retrieval and manipulations of illustrations
This research focuses on retrieval and ranking methodologies based on the drawing styles of illustrative images. Intuitive and automatic conversion and transformation of styles are also investigated for large font datasets.
Smart lighting and illumination
Image-based monitoring and control are developed with a mobile lighting robot. Smart graphical data mapping for decorative illumination is also developed. An image synthesis method for faithfully restoring colored lighting scenes is developed, using smart conversion of HDR images based on a color appearance model of the human visual system.
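As an illustration of the kind of HDR-to-display conversion mentioned above, the following is a minimal sketch of a well-known global tone-mapping operator (in the style of Reinhard et al.), not the lab's own color-appearance-based method; all values are toy data.

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18):
    """Global Reinhard-style operator: scale by the log-average
    luminance, then compress with L / (1 + L) into [0, 1)."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    L = key * luminance / log_avg
    return L / (1.0 + L)

# Toy HDR luminance spanning four orders of magnitude.
hdr = np.array([[0.01, 0.1], [1.0, 100.0]])
ldr = reinhard_tonemap(hdr)
print(ldr.min() >= 0 and ldr.max() <= 1)   # True
```

The compression L / (1 + L) maps arbitrarily bright values into the displayable range while preserving the ordering of luminances.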
Visual communications for IoT
Visible light communications with color LEDs are developed on the basis of optical pattern recognition, using the image sensors of smartphones or tablets. Image sensor communication using graphical representation is also developed for digital signage systems in public spaces. These image-based interaction technologies will be utilized in the forthcoming Internet of Things (IoT) era.
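A toy sketch of the idea behind image-sensor-based visible light communication: each camera frame samples the LED's state, and thresholding the frame brightness recovers one bit per frame. This is a simplified on-off-keying decoder for illustration only, not the lab's actual modulation scheme; the function name and parameters are assumptions.

```python
import numpy as np

def decode_ook(frames, threshold=None):
    """Toy on-off-keying decoder: one transmitted bit per camera frame.
    `frames` is a sequence of 2-D grayscale images of the LED region."""
    levels = np.array([f.mean() for f in frames])
    if threshold is None:
        threshold = levels.mean()   # midpoint between on and off levels
    return (levels > threshold).astype(int).tolist()

# Simulate frames: bright frames encode 1, dark frames encode 0.
rng = np.random.default_rng(0)
bits = [1, 0, 1, 1, 0, 0, 1]
frames = [rng.normal(200 if b else 50, 5, size=(8, 8)) for b in bits]
print(decode_ook(frames))   # [1, 0, 1, 1, 0, 0, 1]
```

Real systems must also handle frame synchronization, rolling-shutter effects, and color (multi-LED) channels, which this sketch omits.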
|Active Intelligent Systems Lab.||Intelligent robots, mobile robots, robot vision, scene recognition, action and recognition planning, person detection and tracking, human-robot interaction|
We are conducting research projects on intelligent robots and systems that can operate autonomously in complex real environments. Project themes include visual scene recognition, on-line action and recognition planning, and human-robot interaction.
Attendant robots / service robots
We are developing autonomous mobile robots with scene recognition and action planning capabilities. They are expected to perform personal service tasks such as guiding, attending, and fetching.
Outdoor navigation / self-driving
We are developing methods for outdoor navigation such as multi-sensory road boundary tracking and view-based localization. These methods can be applied not only to mobile robots but also to self-driving cars.
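The view-based localization idea can be sketched very simply: match the current camera view against a database of reference views recorded along the route, and take the best-matching view's position. The sum-of-squared-differences matcher below is a minimal illustration, not the lab's actual method; the names and toy data are assumptions.

```python
import numpy as np

def localize(view, reference_views):
    """Return the index of the stored reference view that best matches
    `view` under a sum-of-squared-differences score."""
    scores = [np.sum((view - ref) ** 2) for ref in reference_views]
    return int(np.argmin(scores))

# Toy route: three reference views; the query is a noisy copy of view 1.
rng = np.random.default_rng(1)
refs = [rng.uniform(0, 255, size=(16, 16)) for _ in range(3)]
query = refs[1] + rng.normal(0, 5, size=(16, 16))
print(localize(query, refs))   # 1
```

Practical systems replace raw pixel comparison with illumination-invariant features and interpolate position between neighboring reference views.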
Vision-based manipulation / human-robot interaction
We are conducting projects on visually-guided manipulation, tele-operated robots, and human-robot collaborative operation.
|Computer Vision and Image Processing Lab.||3-D reconstruction, image matching, omnidirectional projection systems, safety systems for vulnerable road users|
Computer vision is an important tool for obtaining various kinds of information about a scene, such as 3-D shapes, 2-D shapes, object motion, and the colors of objects and light sources. 3-D shape information is particularly useful. However, the accuracy of 3-D shapes reconstructed from real scene data is often very low. We therefore study accurate methods for matching images and reconstructing 3-D shapes from real images, and we also study computer vision applications for the real world.
Accurate image matching
Accurate 3-D reconstruction from images / video
Omnidirectional projection system
Safety system for vulnerable road users
Image enhancement for dichromats
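The core step behind 3-D reconstruction from matched images can be illustrated by classic linear (DLT) triangulation: given two camera projection matrices and a corresponding point pair, recover the 3-D point as the null vector of a small linear system. This is a textbook sketch with toy camera matrices, not the lab's own accurate method.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; pt1, pt2: (x, y) image points."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (up to scale)
    return X[:3] / X[3]        # homogeneous -> Euclidean

# Two toy cameras: identity pose, and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
p1 = P1 @ np.append(X_true, 1)
p2 = P2 @ np.append(X_true, 1)
pt1, pt2 = p1[:2] / p1[2], p2[:2] / p2[2]
print(np.round(triangulate(P1, P2, pt1, pt2), 3))   # [0.5 0.2 4. ]
```

With noisy real images, the accuracy problems mentioned above arise because the two projection rays no longer intersect exactly, which is why refined matching and optimal correction methods are needed.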
|Image Information and Image Media Lab.||Computer vision, 3-D reconstruction, mixed reality system, ellipse fitting and detection|
We develop methods for 3-D scene analysis from images or video sequences, together with their applications, such as mixed reality and diminished reality systems.
Mixed reality system using circular markers
We developed a circular marker for a mixed reality system that is robust against occlusion of the marker.
Diminished reality system
We developed a diminished reality system that uses a marker together with a 3-D model reconstructed from images.
Ellipse fitting and detection
We study robust and accurate ellipse fitting and detection methods, and develop their applications.
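A minimal sketch of algebraic ellipse fitting: stack each point's conic terms into a design matrix and take the singular vector for the smallest singular value as the conic coefficients. This is the plain least-squares conic fit, not the lab's robust methods (which handle outliers and the ellipse-specific constraint); the toy data are assumptions.

```python
import numpy as np

def fit_ellipse(x, y):
    """Algebraic conic fit: find v minimizing ||D v|| with ||v|| = 1,
    where each row of D is [x^2, xy, y^2, x, y, 1]."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]              # conic coefficients A, B, C, D, E, F

def ellipse_center(conic):
    """Center from the gradient equations of the fitted conic."""
    A, B, C, D, E, _ = conic
    M = np.array([[2 * A, B], [B, 2 * C]])
    return np.linalg.solve(M, [-D, -E])

# Points on an ellipse centered at (3, 1) with semi-axes 2 and 1.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 3 + 2 * np.cos(t)
y = 1 + 1 * np.sin(t)
conic = fit_ellipse(x, y)
print(np.round(ellipse_center(conic), 3))   # [3. 1.]
```

Accurate fitting methods go further, e.g. by reweighting residuals or enforcing the ellipse discriminant constraint B² − 4AC < 0, so that noisy or partial arcs still yield an ellipse rather than another conic.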
|Ubiquitous Systems Lab.||Sensor-actuator networks, wearable systems, context-aware systems, human-robot interaction|
We are studying systems that support human life by linking the real and virtual worlds. On the basis of a sensor-actuator (robot) network consisting of environmental and wearable devices, we study methods for constructing efficiently connected networks, algorithms for obtaining precise user context, and strategies for supporting users' activities, and we design and implement systems that improve the happiness and safety of daily life.
Sensor-actuator networks
To construct an environment in which a system can estimate human activities precisely and robots can work effectively, we are building sensor and robot devices and studying network architectures that allow devices to collaborate with each other efficiently.
Activity data mining
On the basis of statistical estimation, we study algorithms for obtaining users' context, for summarizing and verifying large amounts of context data, and for finding the information needed to support users' activities.
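One small piece of such context estimation can be sketched as temporal smoothing of a noisy per-sample activity sequence: a sliding majority vote removes spurious single-sample labels. This is an illustrative toy, not the lab's statistical estimation pipeline; the labels and window size are assumptions.

```python
from collections import Counter

def smooth_labels(labels, window=3):
    """Majority-vote smoothing of a noisy per-sample activity sequence."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        segment = labels[max(0, i - half): i + half + 1]
        out.append(Counter(segment).most_common(1)[0][0])
    return out

# Raw classifier output with two spurious flips.
raw = ["walk", "walk", "sit", "walk", "walk", "sit", "sit", "sit"]
print(smooth_labels(raw))
# ['walk', 'walk', 'walk', 'walk', 'walk', 'sit', 'sit', 'sit']
```

Statistical models such as HMMs generalize this idea by weighting the vote with learned transition and emission probabilities.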
Ambient robot interface
Treating the devices around us as networked, personified robots, we design and implement interfaces that give users feelings of friendliness, fun, and safety by exploiting the information obtained from the themes described above.