WiMi Announced Image-Fused Point Cloud Semantic Segmentation with Fusion Graph Convolutional Network

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider, announced an image-fused point cloud semantic segmentation method based on a fusion graph convolutional network, which aims to exploit the complementary information in images and point clouds to improve the accuracy and efficiency of semantic segmentation. Point cloud data is very effective at representing the geometry and structure of objects, while image data contains rich color and texture information. Fusing the two types of data draws on their respective strengths at the same time and provides more comprehensive information for semantic segmentation.

The fused graph convolutional network (FGCN) is an effective deep learning model that processes image and point cloud data simultaneously and handles image features of different resolutions and scales for efficient feature extraction and segmentation. FGCN makes fuller use of multi-modal data by extracting the semantic information of each point involved in the bimodal image and point cloud data. To improve the efficiency of image feature extraction, WiMi also introduces a two-channel k-nearest neighbor (KNN) module. By computing the semantic information of the k nearest neighbors around each point, this module lets the FGCN use the spatial information in the image data to better understand image context, helping it distinguish the more important features and suppress irrelevant noise. In addition, FGCN employs a spatial attention mechanism to focus on the more important features in the point cloud data: the model assigns different weights to each point based on its geometry and its relationship to neighboring points, giving a better understanding of the semantic information in the point cloud. Finally, by fusing multi-scale features, FGCN enhances the generalization ability of the network and improves segmentation accuracy; multi-scale feature extraction lets the model consider information at different spatial scales for a more comprehensive understanding of the semantic content of image and point cloud data.
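
To make the aggregation and fusion steps above concrete, here is a minimal, hypothetical PyTorch sketch of a two-channel KNN aggregation with a simple per-point attention fusion. It is not WiMi’s FGCN: the layer sizes, the max-pooling aggregation, and the softmax weighting are illustrative assumptions, and the projection of image features onto the points is assumed to have been done beforehand.

```python
import torch
import torch.nn as nn

def knn_indices(xyz, k):
    # xyz: (N, 3) point coordinates; returns (N, k) indices of nearest neighbors
    dist = torch.cdist(xyz, xyz)                       # (N, N) pairwise distances
    return dist.topk(k, largest=False).indices

class TwoChannelKNNFusion(nn.Module):
    def __init__(self, c_geo=64, c_img=64, c_out=128, k=16):
        super().__init__()
        self.k = k
        self.mlp_geo = nn.Linear(c_geo, c_out)         # geometric channel
        self.mlp_img = nn.Linear(c_img, c_out)         # image (color/texture) channel
        self.attn = nn.Sequential(nn.Linear(2 * c_out, c_out), nn.ReLU(),
                                  nn.Linear(c_out, 2), nn.Softmax(dim=-1))

    def forward(self, xyz, feat_geo, feat_img):
        # xyz: (N, 3); feat_geo: (N, c_geo); feat_img: (N, c_img), image features
        # already projected onto the points (the projection itself is omitted).
        idx = knn_indices(xyz, self.k)                             # (N, k)
        g = self.mlp_geo(feat_geo)[idx].max(dim=1).values          # KNN max-pool, geometry
        i = self.mlp_img(feat_img)[idx].max(dim=1).values          # KNN max-pool, image
        w = self.attn(torch.cat([g, i], dim=-1))                   # (N, 2) per-point weights
        return w[:, :1] * g + w[:, 1:] * i                         # attention-weighted fusion

# Usage on random data:
# xyz, fg, fi = torch.rand(1024, 3), torch.rand(1024, 64), torch.rand(1024, 64)
# fused = TwoChannelKNNFusion()(xyz, fg, fi)   # (1024, 128)
```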

This image-fused point cloud semantic segmentation with a fusion graph convolutional network makes more efficient use of multi-modal data such as images and point clouds to improve the accuracy and efficiency of semantic segmentation. It is expected to advance machine vision, artificial intelligence, photogrammetry, remote sensing, and other fields, providing a new method for future semantic segmentation research.

This image-fused point cloud semantic segmentation with a fusion graph convolutional network has broad application prospects in fields such as autonomous driving, robotics, and medical image analysis, where the demand for processing and semantically segmenting image and point cloud data keeps growing. In autonomous driving, self-driving cars need to accurately perceive and understand the surrounding environment, including semantic segmentation of roads, vehicles, pedestrians, and other objects; the method can improve that perception and understanding and provide more accurate data support for decision making and control. In robotics, robots must perceive and understand the external environment to accomplish various tasks; fusing the image and point cloud data a robot acquires improves that perception and helps the robot complete its tasks. In the medical field, diagnosis and treatment rely on the accurate segmentation and recognition of medical images; fusing medical images with point cloud data can improve segmentation and recognition accuracy and thus provide more accurate data support for medical diagnosis and treatment.

In the future, WiMi will further optimize the model structure and combine the model with deep learning techniques to improve its performance. It will also advance multi-modal data fusion, fusing different types of data (e.g., image, point cloud, and text) to provide more comprehensive and richer information and improve segmentation accuracy, and it will continue to improve the real-time processing capability of the image-fused point cloud semantic segmentation method to meet growing demand.

ARound and Immersal Team Up to Revolutionize Augmented Reality in Sports and Live Entertainment

AR Leaders Collaborate to Develop Next-Level Shared Experiences in WebAR for Venues Worldwide

ARound, the pioneering shared augmented reality (AR) platform, part of Stagwell, is excited to announce a groundbreaking partnership with Immersal, leaders in spatial computing and AR localization technology, and part of Hexagon. By creating a turn-key WebAR solution for stadium AR, this partnership facilitates easier integration of shared AR experiences for teams, venues, and events, broadening the scope of interactive fan engagement. ARound and Immersal are poised to announce their inaugural collaboration with a major sports league next month, marking a significant milestone in bringing this innovative vision to life.

This partnership combines ARound’s connected, shared AR technology, which has transformed live fan experiences for teams across three professional leagues (MLB, NBA, and NFL), including the Minnesota Twins, Los Angeles Rams, Kansas City Royals, and Cleveland Cavaliers, with Immersal’s visual positioning system (VPS), which creates centimeter-accurate, large-scale indoor and outdoor AR experiences.

“This partnership is a game-changer in the world of sports and live entertainment as we collaborate to make stadium AR experiences more accessible and ubiquitous to all fans and types of events,” said Josh Beatty, founder and CEO, ARound. “By integrating our fan engagement platform with Immersal’s robust localization technology, we can seamlessly create dynamic digital experiences that put fans at the center of the action while scaling to new audiences around the world.”

The integration of ARound and Immersal technologies yields greater access and broader use cases of AR experiences through WebAR, enhancing the overall quality and ease of integration for in-stadium entertainment. Fans can interact with live events in real-time, participating in AR games, accessing real-time game content, and enjoying shared experiences with fellow attendees, all from their smartphones without the need for a standalone app. Brands and sponsors will also now be able to connect with audiences in innovative, meaningful ways, enhancing their marketing mix and creating new avenues for engagement.

“We’re committed to innovating and enhancing AR experiences at live events, and our technology, combined with ARound’s exciting platform, will set a new benchmark in how fans interact with live sports and entertainment, offering them an engaging and memorable experience like never before,” said Matias Koski, CEO of Immersal.

This groundbreaking partnership heralds a new era in fan engagement, offering sports teams, venues, and brands an unparalleled platform to connect with audiences. Combining ARound’s interactive fan experiences with Immersal’s precision technology, the stage is now set for a revolution in live entertainment.

WiMi Developed RPSSC Technology With Multiple Advantages in Hyperspectral Image Processing

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider, announced that it has developed Random Patch Spatial Spectrum Classifier (RPSSC) technology to fully utilize the complementarity between spatial and spectral information.

The R&D behind WiMi’s RPSSC combines a 2D Gabor filter with a random patch convolution (GRPC) feature extraction method. First, RPSSC uses principal component analysis (PCA) and linear discriminant analysis (LDA) to reduce the dimensionality of the original hyperspectral image. This step eliminates redundant spectral information while retaining the main information, increases the ratio of inter-class to intra-class distances, and prepares the data for subsequent feature extraction and classification. On the dimensionality-reduced image, RPSSC introduces a two-dimensional Gabor filter. Gabor filters are widely used in computer vision to extract spatial structural features such as edges and textures; through them, RPSSC captures the local texture and spatial information in the image, laying the foundation for subsequent feature extraction. Next, the GRPC method is applied, taking the Gabor features as input. Random patch convolution extracts multilevel spectral features by randomly selecting patches in the image and performing convolution operations on them, synthesizing spatial and spectral information so that the model can understand the image more comprehensively. Finally, RPSSC fuses the spatial features extracted by the GRPC with the multilevel spectral features; this fusion lets the model combine spectral information and local spatial structure into a richer feature representation. A support vector machine (SVM) classifier then classifies the fused features to achieve accurate classification of hyperspectral images.
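
As an illustration of the front end of this pipeline, the sketch below applies PCA to a hyperspectral cube and then filters the leading components with a small bank of 2D Gabor filters. It is a minimal sketch, not WiMi’s implementation: the LDA step, the GRPC stage, and the SVM classifier are omitted, and the component count and Gabor parameters are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.filters import gabor

def pca_gabor_features(cube, n_components=3, frequencies=(0.1, 0.3), thetas=(0, np.pi / 2)):
    # cube: (H, W, B) hyperspectral image with B spectral bands
    h, w, b = cube.shape
    pcs = PCA(n_components=n_components).fit_transform(cube.reshape(-1, b))
    pcs = pcs.reshape(h, w, n_components)                 # spectrally reduced image
    feats = []
    for c in range(n_components):
        for f in frequencies:
            for t in thetas:
                real, _ = gabor(pcs[..., c], frequency=f, theta=t)  # texture/edge response
                feats.append(real)
    return np.stack(feats, axis=-1)                       # (H, W, n_feats) Gabor features

# Example: features = pca_gabor_features(np.random.rand(64, 64, 200))
```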

GRPC feature extraction consists of multiple layers, and each layer contains the following steps (a minimal sketch of one layer is given after the list):

PCA: PCA is performed on randomly selected patches to extract spectral features.

Whitening: The extracted spectral features are whitened to reduce redundant information.

Random projection: The whitened features are projected to a lower dimensional space by random projection.

Convolutional feature extraction: Convolutional operation is performed in the reduced dimensional space to extract multilevel spectral features.
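
The numpy sketch below follows the four steps just listed for a single layer. It is an interpretation rather than WiMi’s code: PCA with whitening is fitted on pixels drawn from random patches, the whitened spectra are randomly projected to a lower dimension, and random patches of the reduced image are then used as convolution kernels. Patch size, counts, and dimensions are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA

def grpc_layer(feat, n_filters=8, patch=5, n_dim=6, n_sample_patches=20, rng=None):
    # feat: (H, W, C) feature image from a previous stage (e.g., the Gabor features)
    if rng is None:
        rng = np.random.default_rng(0)
    h, w, c = feat.shape
    # Steps 1-2: PCA with whitening, fitted on pixels drawn from random patches
    ys = rng.integers(0, h - patch, n_sample_patches)
    xs = rng.integers(0, w - patch, n_sample_patches)
    sample = np.concatenate([feat[y:y + patch, x:x + patch].reshape(-1, c)
                             for y, x in zip(ys, xs)])
    n_pca = min(c, 2 * n_dim)
    pca = PCA(n_components=n_pca, whiten=True).fit(sample)
    # Step 3: random projection of the whitened spectra to a lower-dimensional space
    proj = rng.standard_normal((n_pca, n_dim)) / np.sqrt(n_dim)
    reduced = (pca.transform(feat.reshape(-1, c)) @ proj).reshape(h, w, n_dim)
    # Step 4: convolve the reduced image with random patches drawn from it as kernels
    maps = []
    for _ in range(n_filters):
        y, x = int(rng.integers(0, h - patch)), int(rng.integers(0, w - patch))
        kernel = reduced[y:y + patch, x:x + patch]             # (patch, patch, n_dim)
        resp = sum(convolve2d(reduced[..., d], kernel[..., d], mode="same")
                   for d in range(n_dim))
        maps.append(resp)
    return np.stack(maps, axis=-1)                             # (H, W, n_filters)

# Example of stacking layers to get multilevel features:
# f1 = grpc_layer(features); f2 = grpc_layer(np.concatenate([features, f1], axis=-1))
```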

WiMi’s RPSSC technology has multiple technical advantages in realizing the comprehensive utilization of spectral and spatial features of hyperspectral images. It improves classification accuracy, reduces model complexity, and fully exploits the information of hyperspectral images to provide more effective solutions for practical applications. The technical advantages of WiMi’s RPSSC are as follows:

Simple structure and excellent performance: RPSSC adopts GRPC, which has a relatively simple structure, but shows excellent performance in experiments. This simple structure makes the model easier to understand and optimize, and reduces the deployment cost in real applications.

Fully utilizing spatial and spectral features: RPSSC fully utilizes spatial and spectral features in hyperspectral images by combining 2D Gabor filters and GRPC methods. This combined utilization not only improves the classification accuracy, but also reveals the importance of spatial structural features that are often neglected in traditional methods.

Good adaptability: RPSSC performs well in overcoming the salt-and-pepper noise and excessive smoothing that affect hyperspectral image classification, is applicable to a variety of real-world scenarios, and still achieves high classification accuracy even with a limited number of training samples. This is important for dealing with irregular environments and incomplete data in practical applications.

Stacking of spatial and spectral features: RPSSC realizes the effective stacking of spatial and spectral features, which enables the model to understand hyperspectral images more comprehensively. This comprehensive utilization not only improves the classification accuracy, but also enhances the model’s grasp of the internal structure of the image, providing strong support for more detailed classification.

Applicable to limited training samples: RPSSC can still achieve high classification accuracy with limited training samples. This advantage is especially important in real-world applications because in some domains, obtaining large-scale labeled data can be difficult, and RPSSC’s high efficiency makes it suitable for these challenging scenarios.

Effectively overcoming over-smoothing: In hyperspectral image processing, over-smoothing often leads to loss of information and lower classification accuracy. RPSSC overcomes this through the combined use of spatial and spectral information, improving the accuracy of image processing.

WiMi’s RPSSC has a wide range of applications in hyperspectral image classification. It can be applied to hyperspectral remote sensing images collected by satellites and aircraft for land cover classification, resource surveys, and environmental monitoring. For example, it can accurately classify farmland, forests, and water bodies, enabling efficient management of natural resources. In agriculture, RPSSC can be used for crop type classification, disease detection, and soil analysis; by accurately classifying hyperspectral images, it helps farmers optimize production. RPSSC can also be used for environmental monitoring, including urban planning, water quality monitoring, and vegetation cover monitoring, where comprehensive analysis of hyperspectral images makes it easier to track water pollution and ecosystem changes.

WiMi’s future research directions include further optimization of the RPSSC algorithm to improve its computational efficiency and adapt to large-scale hyperspectral image data computation. Meanwhile, considering the important role of deep learning in the field of image processing, the fusion of RPSSC technology and deep learning may be a research direction in the future to further improve classification accuracy and the ability to handle complex scenes. For different fields and application scenarios, WiMi is committed to developing tailored RPSSC solutions to better meet the needs of different industries.

RPSSC technology marks an important breakthrough for WiMi in the field of hyperspectral image classification. By fully exploiting the spatial and spectral features in hyperspectral images, the RPSSC technology demonstrates outstanding performance and a wide range of potential application areas. While realizing more accurate classification, RPSSC technology provides a new way of thinking for solving the problems of model complexity and long training time that exist in traditional deep learning methods. WiMi’s RPSSC technology represents the cutting edge of hyperspectral image classification. The continuous development and improvement of RPSSC technology will bring more impetus to scientific and technological progress, application innovation, and social development.

WiMi Built an Efficient, Blockchain-compatible Heterogeneous Computing Framework

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider, announced that, by combining cloud servers, general-purpose computers, and FPGAs, it has built a blockchain heterogeneous computing framework named the “HeteroBlock Framework”. The framework is designed to provide users with efficient, flexible, and reliable blockchain computing services to meet growing computing demand.

The core of the HeteroBlock framework lies in its ability to handle heterogeneous nodes. In the traditional homogeneous approach to blockchain computing, all nodes have the same computing capability, which limits the performance and scalability of the blockchain to a certain extent. In contrast, the HeteroBlock framework makes full use of the strengths of different nodes by assigning different computational tasks to heterogeneous nodes, achieving more efficient and economical computation. This not only improves the utilization of computing resources but also opens up more possibilities for blockchain applications. With the support of the HeteroBlock framework, blockchain can be better adapted to a variety of scenarios, from financial transactions to supply chain management and from digital identity verification to IoT device interaction. The framework provides a new way of computing for general-purpose and heterogeneous nodes: combined with related communication protocols and software algorithms, it improves the structure of the blockchain network and provides a powerful infrastructure for deploying smart contracts and executing computational tasks, giving blockchain applications more flexible and powerful support. In addition, the framework reduces computation costs by optimizing the allocation and execution of computing tasks, making it possible to apply blockchain technology in more fields.

The HeteroBlock framework consists of four modules: a data control module that manages the data to be sent, a serial port sending module that performs data transmission, a serial port receiving module that handles data reception, and a key data module that provides data flag-bit prompting. The framework realizes the blockchain-related functions through the interaction of these signals in the time domain. Each local computer, cloud server, and FPGA is treated as a blockchain node, and the nodes work together to accelerate the computation of blockchain operations and form a complete blockchain system. The FPGA, as a heterogeneous computing node, performs tasks such as asymmetric cryptographic computation, hash computation, block generation, and consensus computation. When a transaction occurs, the initiating account digitally signs it with a private key, and the signature can be verified with the corresponding public key. Once the node verifies the transaction successfully, the transaction data is recorded and sent to the local computer for storage. Before each block is packed, the consensus algorithm determines which node receives the right to pack the block; that node then synchronizes the transaction information recorded on each node during that period. The transaction information is hashed to derive a hash value, and the hash value of the previous block is retained to maintain the serial structure of the blockchain. Finally, the packed blocks are synchronized to other nodes through the communication network, and the packed blocks’ feature values are sent to the local computer over the serial port for display and storage.
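
The block-packing flow described above can be illustrated with a short, generic Python sketch. It is not the HeteroBlock code: signature verification, the consensus step, and the FPGA offload are abstracted away, and only the hash chaining that preserves the serial structure of the chain is shown.

```python
import hashlib, json, time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def pack_block(prev_hash: str, transactions: list) -> dict:
    body = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,                      # link to the previous block
        "tx_root": sha256(json.dumps(transactions, sort_keys=True).encode()),
        "transactions": transactions,
    }
    body["hash"] = sha256(json.dumps(body, sort_keys=True).encode())  # this block's feature value
    return body

# Build a tiny chain: a genesis block, then a block holding one verified transaction.
genesis = pack_block("0" * 64, [])
block1 = pack_block(genesis["hash"], [{"from": "A", "to": "B", "amount": 1}])
assert block1["prev_hash"] == genesis["hash"]        # serial structure is preserved
```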

WiMi’s HeteroBlock framework revolutionizes blockchain computing. By combining relevant communication protocols and software algorithms, the framework improves the blockchain network structure and provides strong support for general-purpose computing nodes and heterogeneous nodes. This not only improves the performance and scalability of the blockchain but also provides more possibilities for blockchain applications in various scenarios. With the continuous progress of technology and the expansion of application scenarios, the HeteroBlock framework is expected to become an important cornerstone of future blockchain computing, leading blockchain technology to a broader future.

WiMi Announced an Efficient Hologram Calculation Using the Wavefront Recording Plane

WiMi Hologram Cloud (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, announced that it has developed an efficient hologram calculation method based on the wavefront recording plane, which combines the principles of light wave interference and diffraction. The method determines the effective visible region by analyzing the diffraction characteristics of each object point on a three-dimensional object and, on that basis, identifies the effective hologram size of the object point, thus enabling the rapid generation of holograms.

WiMi’s hologram calculation method using the wavefront recording plane method mainly consists of four key steps, each of which is optimized for the hologram generation process to improve the calculation speed and image quality. Its implementation requires the integrated use of mathematical models, optical theory, and computer algorithms.

Sub-hologram generation: According to the Fresnel diffraction theory, for each point of the 3D object, the corresponding diffraction field is calculated based on its spatial coordinates and wavelength. This can be accelerated by numerical computational methods such as Fast Fourier Transform (FFT).

Optimal segmentation: After obtaining the preliminary results of the sub-holograms, an optimization algorithm is used to optimally segment each sub-hologram according to the distribution of the object points and the diffraction characteristics to ensure maximum diffraction efficiency and image clarity.

Accurate diffraction calculation at the wavefront recording plane (WRP): Using accurate numerical methods, such as the finite-difference time-domain (FDTD) method or other precise numerical simulation techniques, the diffraction from each object point to the WRP is calculated, and the diffraction contributions of the different points are superimposed to obtain the total complex amplitude information on the WRP.

Diffracted light field calculation: Based on the total complex amplitude information on the WRP, the diffraction theory and optical propagation equation are utilized to calculate the diffracted light field distribution from the WRP to the holographic plane, and the final hologram is obtained accordingly. This step requires the use of optical calculations and numerical simulation methods to accurately calculate the propagation and diffraction of the light field.
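
For the propagation steps, a common FFT-based approach is the angular spectrum method, sketched below in numpy. This is a generic illustration rather than WiMi’s implementation; the wavelength, pixel pitch, and propagation distance in the example are arbitrary.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    # field: (N, N) complex amplitude on the WRP; pitch: sampling interval in meters
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                    # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))   # evanescent waves dropped
    transfer = np.exp(1j * kz * distance)              # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * transfer) # field on the hologram plane

# Example: propagate a 512x512 WRP field by 5 cm at 532 nm with 8 um sampling.
# wrp = np.exp(1j * 2 * np.pi * np.random.rand(512, 512))
# holo_field = angular_spectrum_propagate(wrp, 532e-9, 8e-6, 0.05)
```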

At the same time, the method utilizes deep learning technology to improve the accuracy and efficiency of the algorithm by learning and training on a large amount of hologram data, which further accelerates the calculation speed of the hologram. In addition, in practice, hardware devices need to be optimized, combining high-performance computing platforms and customized optical components to improve the computational efficiency and display quality.

WiMi’s hologram calculation method using the wavefront recording plane method is of great significance and far-reaching value as a new holographic display technology. The method realizes the rapid generation of holograms by combining the principle of wavefront precision diffraction and an efficient calculation algorithm. It breaks through many challenges faced by the traditional holographic display technology, such as narrow field of view, serious speckle noise, and slow computation speed, and brings an important breakthrough for the development of holographic display technology. The method achieves high efficiency and versatility in the hologram calculation process by optimizing the algorithm and pre-calculated components. This makes the holographic display technology more widely applicable in practical applications and provides users with a more convenient and fast holographic display experience.

Through accurate wavefront diffraction calculation and light field propagation simulation, this method can realize higher-quality hologram generation and present more realistic and lifelike holograms for users. This will bring broader development space for the application scenarios of holographic display technology, such as providing a better visual experience in the fields of education, medical care, entertainment, and so on.

WiMi’s hologram calculation method using the wavefront recording plane is expected to bring new possibilities and opportunities to the holographic display industry. It is not only an important breakthrough in holographic display technology but also an exploration of the field’s future direction. WiMi will continue to explore and innovate, and is committed to promoting the further development of holographic display technology, providing users with smarter and more convenient holographic display solutions and a better holographic display experience, and helping the technology gain wide use around the world.

WiMi Announced the Optimization of Artificial Neural Networks Using Group Intelligence Algorithm

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider, announced that it has adopted a group intelligence algorithm to optimize artificial neural networks. The algorithm streamlines the process of determining the network structure and training the network, and it is better than traditional algorithms at finding optimal connection weights and biases during training.

The group intelligence algorithm is a meta-heuristic optimization algorithm inspired by the behavioral patterns of groups of animals and insects as their environment changes. These algorithms use the simple collective behavior of certain groups of biological organisms to generate group intelligence, which allows them to solve complex optimization problems through the interaction between groups of artificial search agents and the environment. Group intelligence algorithms can solve different types of optimization problems, including continuous, discrete, and multi-objective problems, and therefore have a wide range of applications across many fields.

WiMi used a group intelligence algorithm to improve the generalization ability of artificial neural networks by optimizing the connection weights and biases or the network structure. The following are the steps of the algorithm (a minimal particle swarm optimization sketch appears after the steps):

Determine the structure and parameters of the neural network: Setting and adjusting the structure and parameters of the neural network according to the specific problem, such as the number of layers, the number of neurons in each layer, the activation functions, and so on.

Prepare the training dataset: Selecting an appropriate training dataset for training the neural network.

Initialize the population: Randomly generating a set of candidate solutions to the problem as the initial population. In the context of neural network optimization, this can include randomly generating a set of initial weight and bias values as initial solutions for the neural network.

Calculate the fitness: A fitness function is defined based on the nature of the problem and is used to evaluate the quality of each solution. In the context of neural network optimization, this can include calculating the error between the output of the network and the actual label as the fitness.

Search: Updating each solution in the population according to an update rule, either one based on modeling the movement of swarming organisms, such as particle swarm optimization (PSO), the artificial fish swarm algorithm (AFSA), and the shuffled frog leaping algorithm (SFLA), or one set according to some other algorithmic mechanism, such as ant colony optimization (ACO). The fitness of each solution and stochastic factors are considered in the update to improve search efficiency.

Termination conditions: Ensuring that the process satisfies certain termination conditions, such as reaching a preset maximum number of iterations or finding a satisfactory solution.

Testing and evaluation: Testing and evaluating the optimized neural network using a test dataset to verify its performance and generalization ability.
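
As noted above, here is a minimal particle swarm optimization (PSO) sketch, one representative group intelligence algorithm, that runs these steps over the flattened weights of a tiny one-hidden-layer network on random toy data. It is an illustration under those assumptions, not WiMi’s system; the swarm size, coefficients, and network shape are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.standard_normal((64, 2)), rng.standard_normal(64)   # toy dataset
D = 2 * 8 + 8 + 8 + 1                                          # flattened weight vector size

def fitness(w):
    W1, b1 = w[:16].reshape(2, 8), w[16:24]
    W2, b2 = w[24:32], w[32]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2                       # forward pass
    return np.mean((pred - y) ** 2)                             # error vs. labels as fitness

n, iters = 30, 200                                              # swarm size, iteration budget
pos = rng.standard_normal((n, D)); vel = np.zeros((n, D))       # initial population
pbest = pos.copy(); pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):                                          # PSO update rule (search step)
    r1, r2 = rng.random((n, D)), rng.random((n, D))             # stochastic factors
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best MSE found:", pbest_f.min())                         # evaluate the optimized weights
```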

The group intelligence optimization algorithm is a probabilistic stochastic search method, so the result obtained is not guaranteed to be the global optimum, but it is usually a good solution. In addition, WiMi will incorporate other techniques such as feature selection and data pre-processing to further improve the performance and generalization of the neural network.

WiMi Announces Hybrid Recurrent Neural Network Architecture-based Intention Recognition

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider, announced that it has proposed human-robot collaboration intent recognition based on a hybrid recurrent neural network architecture. The hybrid architecture is a model that combines a recurrent neural network (RNN) with a convolutional neural network (CNN). An RNN is suited to modeling and processing sequential data: through recurrent connections and hidden-state updates, it efficiently captures temporal information and contextual relationships in a sequence. A CNN, in turn, effectively extracts local data features. The hybrid recurrent neural network combines the advantages of both, capturing sequence information and local features better and therefore handling intent recognition for human-robot collaboration more effectively.

In the hybrid architecture, the input data first undergoes feature extraction by the CNN, then temporal modeling by the recurrent layer, and finally a fully connected layer maps the features to an intent. During training, the backpropagation algorithm is used to optimize the model parameters and improve the accuracy of intent recognition.

WiMi’s hybrid recurrent neural network architecture-based human-robot collaboration intent recognition mainly consists of the following layers (a minimal sketch of the stack follows the list):

Input layer: The input layer receives raw data from the human-robot collaborative scenario, such as speech, images, or text. Different types of data need to undergo appropriate pre-processing and feature extraction operations to better represent the information.

Recurrent layer: The recurrent layer utilizes an RNN to capture the sequence information of the input data. Commonly used RNN units include the long short-term memory (LSTM) unit and the gated recurrent unit (GRU). Through recurrent connections, the RNN models the input sequence and passes historical information to subsequent layers.

Convolutional layer: The convolutional layer utilizes CNN to extract local features of the input data. Through convolution operation and pooling operation, CNN can effectively capture spatial and temporal correlations in the input data. The convolutional layer is usually used to process image data or spectral representation of speech data.

Fusion layer: The fusion layer fuses the outputs of the recurrent and convolutional layers to obtain more comprehensive and enriched features, and the fused features are fed into the next layer.

Output layer: The output layer is designed according to the specific task, for example, the classification task can use a fully connected layer and softmax function for multi-category classification. The result of the output layer can represent the category or probability distribution of the human-robot collaborative intent.
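
A minimal PyTorch sketch of this layer stack follows. It is an assumed, illustrative architecture rather than WiMi’s model: per-frame CNN features feed an LSTM, and the last hidden state is mapped to intent probabilities; the fusion layer is reduced here to the CNN-to-RNN hand-off, and the input shape and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class HybridIntentNet(nn.Module):
    def __init__(self, n_intents=5, in_channels=1, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                          # convolutional layer: local features
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.rnn = nn.LSTM(32, hidden, batch_first=True)    # recurrent layer: sequence info
        self.head = nn.Linear(hidden, n_intents)            # output layer

    def forward(self, frames):
        # frames: (batch, time, channels, H, W) sequence of pre-processed inputs
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)   # (b*t, 32) per-frame features
        feats = feats.view(b, t, -1)                         # restore the time axis
        _, (h, _) = self.rnn(feats)                          # last hidden state summarizes history
        return self.head(h[-1]).softmax(dim=-1)              # intent probability distribution

# Example: 8 sequences of 10 frames, each 1x32x32.
# probs = HybridIntentNet()(torch.rand(8, 10, 1, 32, 32))    # (8, 5)
```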

Using hybrid recurrent neural network architecture for human-robot collaboration intention recognition can greatly improve the efficiency and quality of human-robot collaboration. Human-robot collaboration intention recognition is an important research area that can help robots to better understand human intentions and goals, thus enabling more intelligent and efficient human-robot collaboration. By accurately understanding human intentions, robots can better respond to and assist humans in accomplishing tasks, thus improving work efficiency. In addition, human-robot collaboration intent recognition can improve the user experience of human-robot interaction. If robots can accurately recognize the human’s intention and respond accordingly, the user will feel more natural and comfortable, thus enhancing the user’s trust and satisfaction with robots. Human-robot collaborative intent recognition can be applied in various fields, such as smart home, smart office, and smart healthcare, etc., to bring convenience and benefits to people’s lives and work.

In the field of hybrid recurrent neural network-based human-robot collaboration intent recognition, other research directions deserve further exploration. Current intent recognition relies mainly on text data, yet real human-robot interaction often involves multi-modal information such as speech, images, and video. In the future, WiMi will work on fusing multi-modal information into the hybrid recurrent neural network and on using transfer learning to enhance human-robot collaborative intent recognition, continually expanding its scope of application through further research and exploration.

Digital Twin Consortium Signs Liaison with Open Industry 4.0 Alliance

Further advancing the use of digital twins in Industry 4.0 via open standards

The Digital Twin Consortium (DTC) announced a liaison agreement with the Open Industry 4.0 Alliance. The Open Industry 4.0 Alliance joined the DTC to not just exchange information but also to bring digitalization and collaboration to the next level.

The Open Industry 4.0 Alliance is a collaborative consortium comprising prominent industrial companies actively involved in deploying cross-vendor Industry 4.0 solutions and services for manufacturing facilities and automated warehouses. Within its industry and technology working groups, subject matter experts conceive practical scenarios and put them into practice using the Open Industry 4.0 Alliance reference architecture. These solutions, along with detailed implementation instructions, are shared within the community and made accessible to parties beyond the Alliance.

“We are excited about working with the Open Industry 4.0 Alliance,” said Dan Isaacs, GM & CTO of DTC. “We look forward to helping manufacturers and solutions providers further the use of digital twins in smart factories, oil & gas, pharma, and others based on Industry 4.0 and key open industry standards.”

“The collaboration between the DTC and the Open Industry 4.0 Alliance aims to drive the alignment of technology components and other elements to ensure interoperability,” says Ricardo Dunkel, Technical Director at the Open Industry 4.0 Alliance. “Together we are working on the standardization and integration of technologies in vertical use cases, proof-of-concepts and Value Innovation Platforms (VIP). This collaborative partnership will be strengthened through the exchange of information, regular consultations and joint events to drive digitalization and promote collaboration.”

The two groups have agreed to the following:

  • Realizing interoperability by harmonizing technology components and other elements
  • Aligning work in Digital Twin Consortium Capabilities and Technology for adoption within vertical domains through proof of value projects and use cases, including:
    • Composable and Architectural Frameworks
    • Advanced Capabilities and Technology showcases
    • Security and Trustworthiness applications
    • Conceptual, informational, structural, and behavioral models
    • Enabling technologies such as AR, VR, AI, and other advancements
    • Case study development from initial concept through operational analysis

The DTC and Open Industry 4.0 Alliance will exchange information through regular consultations, seminars, and training development vehicles.

Wearable AR Deployed in Highly Secure Corporate Environments: Research by AR for Enterprise Alliance

The Augmented Reality for Enterprise Alliance (AREA) published a new research report entitled Deployment of Wearable AR in Highly Secure Corporate Environments. This report investigates barriers to industry adoption of AR related to cybersecurity, with application-level authentication being the most critical.

“Many organizations are rightly concerned about cybersecurity threats and forbid the use of unsecured devices,” said Mark Sage, Executive Director of AREA. “The industry needs to integrate AR hardware and software, including AR applications, with existing enterprise infrastructure while ensuring proper access controls are in place, and that, if an individual device is lost or stolen, no information is compromised.”

The research addresses securing AR content and data at the application layer for multi-user devices. Typically, only one person at a time will use wearable and hand-held XR devices; the sessions must be authenticated, with content and generated artifacts removed once they have ended. Organizations must encrypt simulated sensitive information at rest, in transit to a device, and from the device upon logout or closing of the application.
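
The session-scoped handling described above can be pictured with a short, generic Python sketch: content cached at rest is encrypted with a key that exists only for the authenticated session and is discarded at logout, after which the cached artifacts are unreadable. This is an illustration only, not the AREA design pattern (which targets Unity), and the class and method names are hypothetical.

```python
from cryptography.fernet import Fernet

class ARSession:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self._key = Fernet.generate_key()       # exists only for this authenticated session
        self._cipher = Fernet(self._key)
        self._store = {}                        # encrypted-at-rest content cache

    def cache_content(self, name: str, payload: bytes) -> None:
        self._store[name] = self._cipher.encrypt(payload)

    def read_content(self, name: str) -> bytes:
        return self._cipher.decrypt(self._store[name])

    def logout(self) -> None:
        self._store.clear()                     # remove generated artifacts
        self._key = self._cipher = None         # discard the key; cached data is now unreadable

# s = ARSession("worker-01"); s.cache_content("notes", b"sensitive overlay"); s.logout()
```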

The research demonstrates an implementation of application-level authentication in the Unity development framework, the most widely adopted and supported application framework for head-mounted augmented reality devices. The outcome provides a design pattern that organizations can apply in sensitive corporate environments, along with a detailed discussion of additional cybersecurity considerations. The research also includes Unity code that only AREA members can access.

An executive summary of Deployment of Wearable AR in Highly Secure Corporate Environments is available on the AREA website, alongside executive summaries of other AREA resources and enterprise guidance.

WiMi Announced Asymmetric Spectral Network Algorithm

WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider, announced that its R&D team has proposed an asymmetric spectral network algorithm. The algorithm employs asymmetric coordinate spectral-spatial feature fusion to provide a novel, end-to-end feature learning method for hyperspectral image classification. Its adaptive feature fusion can extract discriminative spectral-spatial features and, unlike common feature fusion methods, is more adaptable to multi-hop connectivity tasks while eliminating the need for manual parameterization.

WiMi’s asymmetric spectral network algorithm solves the spectral noise problem through adaptive feature fusion. The algorithm allows the network to adaptively fuse multiple pieces of information to extract discriminative spectral-spatial features. Unlike traditional feature fusion, this algorithm does not require manual parameterization and is adapted to multi-hop connectivity tasks. This adaptivity helps to efficiently handle complex spectral data and improves the algorithm’s ability to recognize real signals.

To address the band correlation problem, the asymmetric spectral network algorithm introduces a coordinate and strip pooling module. The coordinate component captures precise positional and channel information, which helps the network better understand the spatial structure of the data, while the strip pooling module avoids introducing irrelevant information. Together, the two techniques make the network more adaptive and better able to handle the complex band correlations present in hyperspectral images.
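
As one way to picture the strip pooling idea, the PyTorch sketch below pools a feature map along each spatial axis separately and uses the result to gate the input, so long-range context is captured without mixing in unrelated regions. It is a generic illustration under assumed channel counts, not WiMi’s module.

```python
import torch
import torch.nn as nn

class StripPooling(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool across width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool across height -> (B, C, 1, W)
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        # x: (B, C, H, W) spectral-spatial feature map
        h = self.conv_h(self.pool_h(x)).expand_as(x)    # horizontal strip context
        w = self.conv_w(self.pool_w(x)).expand_as(x)    # vertical strip context
        return x * torch.sigmoid(self.fuse(h + w))      # gate the input with strip context

# out = StripPooling()(torch.rand(2, 32, 64, 64))       # (2, 32, 64, 64)
```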

WiMi’s asymmetric spectral network algorithm also emphasizes simplicity, reducing model complexity and training time. Through the asymmetric learning model and adaptive feature fusion, the algorithm lowers its complexity while maintaining high classification performance, making it more suitable for practical application scenarios and more efficient for hyperspectral image classification tasks.

WiMi’s asymmetric spectral network algorithm focuses not only on static scenes but also on dynamic scenes. Its end-to-end feature learning approach and adaptive feature fusion method enable the algorithm to better adapt to the ever-changing information in hyperspectral images, thus improving the classification accuracy in dynamic scenes. It effectively overcomes the technical challenges in hyperspectral image classification and brings a more efficient and accurate solution.

In addition, it introduces the key technology of asymmetric coordinate spectral spatial feature fusion. The algorithm learns the feature representation of hyperspectral images end-to-end through an asymmetric learning model. Compared to traditional methods, this asymmetric learning approach better captures the complex relationships between pixels, enabling the model to more accurately understand the non-uniformity of the spatial distribution, thus improving the classification accuracy.

The successful development of WiMi’s asymmetric spectral network algorithm provides greater feasibility for real-world application scenarios. By reducing model complexity and improving training and inference efficiency, the algorithm can be better adapted to real-world requirements, especially in decision-making and monitoring scenarios that require fast response, demonstrating significant advantages. The introduction of the algorithm will drive hyperspectral image classification technology into a new stage of development. This is expected to stimulate more research and innovation and drive the whole field forward.

WiMi’s asymmetric spectral network algorithm provides a more accurate and efficient solution for hyperspectral data analysis and processing in fields such as crop detection and geological exploration. In the future, with further optimization, the algorithm will be applied to a wider range of fields, such as environmental monitoring and weather prediction, providing more powerful support for various industries and accelerating the deep integration of scientific research and industry.

Considering the prevalence of dynamic scenes in hyperspectral image classification tasks, WiMi will continue to optimize the adaptability of the asymmetric spectral network algorithm. By further improving the end-to-end learning approach and adaptive feature fusion method, the algorithm is better adapted to rapidly changing environments and improves classification accuracy in dynamic scenes. WiMi’s asymmetric spectral network algorithm opens up new horizons in the field of hyperspectral image classification, and will continue to play an important role in scientific research, industrial applications, and technological innovation.
