This Science News Wire page contains a press release issued by an organization and is provided to you "as is" with little or no review from Science X staff.

6G vision: The future of wireless communications can be seen through your phone's camera lens

November 11th, 2022
Summer campers work on their projects as part of the C-Tech2 programming in Blacksburg. Credit: Chelsea Seeber for Virginia Tech.

Our mobile devices have become much more than a means of communication. They navigate us through rush hour traffic, allow us to have groceries delivered to our doors, serve as compact and high-quality cameras, and store countless documents and photos.

Researchers at the Bradley Department of Electrical and Computer Engineering (ECE) want to take the multi-use function of our cellular devices even further with the idea of a "vision-guided wireless system."

Professors Walid Saad and Harpreet Dhillon of Virginia Tech's College of Engineering have been awarded a $1 million grant from the National Science Foundation (NSF) to develop technologies that use our current wireless systems, as well as emerging wireless systems such as 6G, to "view" and map the surrounding environment.

This mapping of a user's surroundings improves communication by identifying obstacles that may interfere with radio frequency signals.

"In 6G, we talk about high-frequency bands like terahertz [THz]," said Saad. "These high frequencies can deliver high rates and high bandwidth, but the problem is that the signals are susceptible to blockages—much more so than low frequencies. Those frequencies can be blocked by things like your arms moving, or someone standing in a room with you."

Although these blockages might seem irritating, they could actually be the key to helping researchers improve communication.

"If a communication system fails because the signal is blocked, at sub-THz bands, we can still use that information to sense the environment and know that there was an obstacle in the first place," said Saad. "Then, with both situational awareness and other side information—like a picture of the room—we can use that multimodal data to communicate better."

This novel research will develop a framework that merges tools from machine learning, communication theory, distributed split learning, and optimization. The goal is to use information from both wireless and nonwireless sources to optimize vision-guided systems.

These vision-guided systems can be used in several applications, such as enhancing the performance of tomorrow's wireless systems (6G), creating a more advanced and interactive gaming environment, and pushing the boundaries of extended reality while exploring the metaverse.

To fuse all of this information, Saad and Dhillon will turn to machine learning and artificial intelligence, in particular a machine learning model called a transformer.

Typically, transformers are used for language processing in the world of artificial intelligence (AI); translation software is a good example of this technology. Using this type of model for vision-guided communication research is a new approach that the group is excited to explore.

The project team will use the transformer to combine their wireless and nonwireless data, find the best system configuration, and thereby understand what makes a better communications strategy. Images in particular will be the key additional pieces of information needed to improve communication. "If we know more, we can do more," said Saad.
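The core operation of a transformer, scaled dot-product attention, can fuse tokens from different modalities in one step. The sketch below is a minimal, untrained NumPy illustration under assumed shapes: a few "wireless" feature tokens and a few "image" feature tokens are attended over jointly and pooled into one vector that a downstream head could map to a configuration choice, such as a beam index. The function names, dimensions, and random weights are the author's stand-ins, not the team's actual model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(wireless_feats, image_feats, seed=0):
    """Fuse wireless and image feature tokens with one scaled dot-product
    attention layer and mean-pool the result into a single fused vector.
    Random weights stand in for what training would normally learn."""
    tokens = np.vstack([wireless_feats, image_feats])   # (n_tokens, d)
    d = tokens.shape[1]
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))                # rows sum to 1
    return (attn @ V).mean(axis=0)                      # pooled fused vector

rng = np.random.default_rng(1)
wireless = rng.standard_normal((4, 16))   # e.g. per-beam signal statistics
image = rng.standard_normal((9, 16))      # e.g. embeddings of image patches
fused = attention_fuse(wireless, image)
print(fused.shape)  # (16,)
```

Because attention lets every wireless token weigh every image token (and vice versa), a blocked beam measurement can be interpreted in light of, say, a person visible in the camera frame.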

Dhillon, an ECE professor, Elizabeth and James E. Turner Jr. '56 Faculty Fellow, and the associate director of Wireless@VT, is optimistic about the research and its potential impact for wireless in an already futuristic landscape.

"Mobile wireless devices have gradually transformed from mere communications devices into powerful computing platforms with a multitude of sensors, such as cameras and radars," said Dhillon. "In fact, when we shop for a new phone, the main considerations are its camera quality, processing speed, memory, and sensors, whereas hardly anyone checks its frequency bands. Since this is one of the first efforts to do a comprehensive analysis of vision-guided wireless systems, this is expected to have a significant impact on future generations of wireless."

When it comes to using imaging in this type of research, Finnish researcher Mehdi Bennis of the University of Oulu was an obvious choice. Bennis is internationally known for his work in edge intelligence, the combination of AI and edge computing, and has published several papers on vision-guided wireless systems and how images can improve wireless performance. A longtime collaborator of Saad's, he will provide guidance and expertise across multiple stages of the project.

"This proposal explores a very important and timely problem, which is how to leverage and fuse multiple pieces of information, such as images, to better optimize wireless resources and network deployments," said Bennis. "If successful, this will help reduce some of the network operational and deployment costs for both vendors and consumers."

As a team, Saad, Dhillon, and Bennis will work closely on all the research thrusts: Bennis will extract useful information from the data, Dhillon will use that information to develop efficient communication strategies, and Saad will incorporate those strategies into network optimization.

"I enjoy how, in a very short discussion session—usually a single meeting—we can quickly dissect a very complex problem and convert it into a holistic research agenda that pushes the boundaries of our field," Saad, an IEEE fellow, said about his working relationship with Dhillon. "We approach the same problems from very different, yet very synergistic, perspectives, which makes it very enjoyable to brainstorm ideas and chart out visionary projects."

As part of the NSF funding, the team will also implement an educational outreach program for K-12 students using a hands-on learning platform. The goal is to introduce students to the basics of communications and sensing and then potentially expand that platform to integrate the idea of vision-guided communication. The researchers plan to incorporate the platform at the C-Tech2 and Imagination summer camps, held at Virginia Tech for female middle school and high school students.

In addition, each of the principal investigators will integrate the research into existing courses and lectures for students at all levels.

"I am particularly excited for the opportunity to push the boundaries simultaneously in wireless and machine learning. Our methods will dramatically change how AI-driven wireless systems protocols are designed, and it can have a multitude of future applications in the 6G space," said Saad.

Dhillon added, "Using vision information for improving wireless communications performance is a fascinating idea. Forward-looking projects like this one are important for further enhancing Virginia Tech's reputation as the premier place for wireless communications research."

Provided by Virginia Tech

Citation: 6G vision: The future of wireless communications can be seen through your phone's camera lens (2022, November 11), retrieved 14 April 2024.
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.