Advances in machine learning and AI unlock a myriad of applications

The July 2021 issue of IEEE/CAA Journal of Automatica Sinica features six articles that showcase the potential of machine learning in its various forms. The applications described in the studies range from advanced driver assistance systems and computer vision to image processing and collaborative robotics.


Image Credit: IEEE/CAA Journal of Automatica Sinica

Automation technology has reshaped both the way we work and the way we tackle problems. Thanks to the progress made in robotics and artificial intelligence (AI) over the last few years, it is now possible to leave many tasks in the hands of machines and algorithms.

To highlight these advances, the IEEE and the Chinese Association of Automation (CAA) joined forces to publish IEEE/CAA Journal of Automatica Sinica. The journal ranks among the top 7% in artificial intelligence, control/systems engineering, and information systems (by CiteScore), with high-quality papers on all areas of automation science and engineering. The July 2021 issue features six articles covering innovative applications of AI that can make our lives easier.

The first article, authored by researchers from the ASIM Lab in Virginia Tech's Mechanical Engineering Department, USA, delves into an interesting mixture of topics: intelligent cars, machine learning, and electroencephalography (EEG). Self-driving cars have been in the spotlight for a while, so how does EEG fit into this picture?

Sometimes drivers become distracted or fatigued without realizing it, increasing the risk of a traffic accident. Fortunately, cars can now be equipped with AI systems that sense and analyze the driver's EEG signals to constantly monitor their state and issue warnings when deemed necessary. The article reviews the latest EEG-based driver state estimation techniques, and the authors also provide detailed tutorials on the most popular EEG decoding methods and neural network models to help researchers become familiar with the field. The authors explain, "By implementing these EEG-based methods, drivers' state can be estimated more accurately, improving road safety."
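To give a flavor of what EEG decoding involves, the sketch below computes a classic band-power fatigue index. This is an illustrative example of one well-known marker (rising theta/alpha power relative to beta under drowsiness), not the specific pipeline from the surveyed paper; the signals, sampling rate, and band limits are all assumptions.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` in the [lo, hi) Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def fatigue_index(eeg, fs=256):
    """(theta + alpha) / beta power ratio; higher values suggest drowsiness."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    return (theta + alpha) / beta

# Synthetic 4-second "EEG": a drowsy trace dominated by 10 Hz (alpha)
# and an alert trace dominated by 20 Hz (beta).
fs = 256
t = np.arange(0, 4, 1 / fs)
drowsy = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
alert = 0.3 * np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 20 * t)
print(fatigue_index(drowsy, fs) > fatigue_index(alert, fs))  # True
```

A real driver-monitoring system would of course use multi-channel recordings, artifact removal, and a trained classifier rather than a single hand-picked threshold, but the band-power feature is the kind of input such models consume.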

Next, a research team from Sichuan University, China, proposes a new approach to image captioning, a task that remains difficult for computers. The problem is that even though computers can now aptly recognize objects in a given image, describing the scene based solely on those objects is tricky. To tackle this, the researchers developed a global attention-based network that estimates the probability that a given region of the image will be mentioned in the caption. This was achieved by analyzing the similarities between local visual features and global caption features. Using an attention module, the model can more accurately attend to the most important regions of the image to produce a good caption. Automatic image captioning is a great tool for indexing large image datasets and helping the visually impaired.
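The core attention idea can be sketched in a few lines: score each image region by its similarity to a global context vector, then turn the scores into weights with a softmax. This is a minimal, generic dot-product attention example with made-up dimensions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(region_feats, global_feat):
    """region_feats: (R, D) local visual features; global_feat: (D,) context.
    Returns per-region attention weights (summing to 1) and the
    attention-weighted feature a caption decoder would consume."""
    scores = region_feats @ global_feat  # dot-product similarity per region
    weights = softmax(scores)
    context = weights @ region_feats     # (D,) weighted mixture of regions
    return weights, context

regions = np.eye(5, 8)        # 5 toy regions with one-hot 8-dim features
global_vec = np.zeros(8)
global_vec[2] = 1.0           # caption context aligned with region 2
weights, ctx = attend(regions, global_vec)
print(weights.argmax())       # 2: region 2 gets the most attention
```

The softmax guarantees the weights form a probability distribution over regions, which is exactly the "probability of being mentioned in the caption" interpretation described above.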

In the third article, scientists from Xidian University, China, bring collaborative robotics to the field of top-view surveillance. More specifically, they propose a detailed framework that applies deep learning to top-view computer vision, in contrast to most studies, which focus on frontal-view images. The framework uses a smart robot camera with an embedded visual processing unit running deep-learning algorithms for the detection and tracking of multiple objects, tasks essential to applications such as crime prevention and crowd and behavior analysis.

In the fourth article, researchers from Guilin University, China, propose a new approach to producing super-resolution images based on features that a neural network can extract and use. Their method, called a weighted multi-scale residual network, leverages both global and local image features from different scales to reconstruct high-quality images with state-of-the-art performance. The authors say, "Current imaging devices certainly cannot provide enough computing resources, and thus, we designed a fast and lightweight architecture to mitigate this problem."

The fifth article, by researchers from the University of New South Wales, Australia, covers the complex topic of transparency and trust in human–swarm teaming. According to the authors, explainability, interpretability, and predictability are distinct yet overlapping concepts in artificial intelligence that are subordinate to transparency. Drawing from the literature, they propose an architecture to ensure trustworthy collaboration between humans and machine swarms, going beyond the usual master–slave paradigm. The researchers conclude, "Human-swarm teams will require increased levels of transparency before we can begin to leverage the opportunity that these systems present."

Next, scientists from the University of Electronic Science and Technology of China showcase yet another use of deep neural networks in computer vision, this time in video anomaly detection. Existing models for automatically detecting anomalies in video footage try to predict or reconstruct a frame based on previous input and, by calculating the reconstruction error, determine whether anything seems out of place. The problem with this approach is that abnormal frames are sometimes reconstructed well, leading to false negatives. The scientists tackled this problem by developing a cognitive memory-augmented network that imitates the way humans remember normal samples and uses both reconstruction error and calculated novelty scores to detect anomalies in videos. With verified state-of-the-art performance, the network can be readily applied in surveillance tasks, such as accident and public safety monitoring.
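The combined-score idea can be illustrated with a toy sketch: keep a small "memory" of normal feature prototypes and score each frame by its reconstruction error plus its distance to the nearest memory item (a simple novelty term). This is a hedged simplification, assuming made-up 2-D frame features; the paper's actual network learns its memory and features end to end.

```python
import numpy as np

def anomaly_score(frame, reconstruction, memory, alpha=0.5):
    """Blend reconstruction error with a novelty term: distance from the
    frame's features to the closest remembered "normal" prototype."""
    recon_err = np.mean((frame - reconstruction) ** 2)
    novelty = np.min(np.linalg.norm(memory - frame, axis=1))
    return alpha * recon_err + (1 - alpha) * novelty

# Two prototypes standing in for remembered normal frames.
memory = np.array([[0.0, 0.0],
                   [1.0, 1.0]])

normal = np.array([0.1, 0.1])
odd = np.array([5.0, -4.0])

# Assume the model reconstructs BOTH frames fairly well (the false-negative
# trap described above): the novelty term still flags the odd frame.
print(anomaly_score(normal, normal * 0.9, memory) <
      anomaly_score(odd, odd * 0.9, memory))  # True
```

This is precisely why the memory component helps: even when an abnormal frame is reconstructed accurately, its distance to every remembered normal pattern stays large, so the combined score remains high.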

We are all very likely to witness artificial intelligence becoming pivotal in many real-life applications soon. So, make sure to keep up with the times by checking out the July 2021 issue of IEEE/CAA Journal of Automatica Sinica!

Journal references:
  • Zhang, C. & Eskandarian, A. (2021) A Survey and Tutorial of EEG-Based Brain Monitoring for Driver State Analysis. IEEE/CAA Journal of Automatica Sinica. doi.org/10.1109/JAS.2020.1003450.
  • Liu, P., et al. (2021) Global-Attention-Based Neural Networks for Vision Language Intelligence. IEEE/CAA Journal of Automatica Sinica. doi.org/10.1109/JAS.2020.1003402.
  • Ahmed, I., et al. (2021) Towards Collaborative Robotics in Top View Surveillance: A Framework for Multiple Object Tracking by Detection Using Deep Learning. IEEE/CAA Journal of Automatica Sinica. doi.org/10.1109/JAS.2020.1003453.
  • Sun, L., et al. (2021) Lightweight Image Super-Resolution via Weighted Multi-Scale Residual Network. IEEE/CAA Journal of Automatica Sinica. doi.org/10.1109/JAS.2021.1004009.
  • Hepworth, A.J., et al. (2021) Human-Swarm-Teaming Transparency and Trust Architecture. IEEE/CAA Journal of Automatica Sinica. doi.org/10.1109/JAS.2020.1003545.
  • Xu, X., et al. (2021) A Cognitive Memory-Augmented Network for Visual Anomaly Detection. IEEE/CAA Journal of Automatica Sinica. doi.org/10.1109/JAS.2021.1004045.
