Decision Tree Learning Archives - AiThority
https://aithority.com/category/machine-learning/decision-tree-learning/

Snap Researchers Introduce NeROIC for Object Capture and Rendering Applications
https://aithority.com/machine-learning/pattern-recognition/neroic-object-capture-and-rendering-applications/
Mon, 26 Dec 2022


AI researchers at Snap have collaborated with the University of Southern California to introduce a path-breaking neural application called NeROIC.

NeROIC stands for Neural Rendering of Objects from Online Image Collection. This machine learning-trained model acquires and analyzes object representations in 2D and 3D frames to enable various types of object-centric rendering applications. Let’s examine NeROIC and how its application can simplify the image acquisition and rendition processes.

What is NeROIC?

NeROIC is an innovative two-stage model to acquire object representations from online image collections.

Stage 1:

The first stage takes images of an object captured from various angles as input. Camera poses for the different views are estimated, and the object's geometry is trained with a Neural Radiance Fields-based (NeRF) network. Using the learned density function, the model then computes surface normals through a normal extraction layer to produce images in a neural rendering plane.

This NeRF model, optimized across varying capture conditions, is then decoupled in stage two to improve novel-view rendering and training.
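To make the stage-one machinery concrete, here is a minimal PyTorch sketch of the general NeRF idea (an illustration, not the NeROIC implementation; the network sizes and sampling are arbitrary): an MLP maps a 3D position to a density and a color, and a surface normal can be extracted as the negative, normalized gradient of the density with respect to position.

```python
import torch
import torch.nn as nn

# Minimal NeRF-style field: 3D position -> (density, RGB color).
class RadianceField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
        )

    def forward(self, xyz):
        out = self.net(xyz)
        density = torch.relu(out[:, :1])   # densities are non-negative
        color = torch.sigmoid(out[:, 1:])  # RGB constrained to [0, 1]
        return density, color

field = RadianceField()
xyz = torch.randn(8, 3, requires_grad=True)  # sample points along camera rays
density, color = field(xyz)

# "Normal extraction": the surface normal is taken as the negative,
# normalized gradient of density with respect to the input position.
grad = torch.autograd.grad(density.sum(), xyz, create_graph=True)[0]
normals = -grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
```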


Stage 2:

In the second stage, the model infers lighting and renders the image, synthesizing novel views under different environments and lighting conditions.

This process is explained in the paper here:

[Figure: Object capture results from online images]

Why Use NeROIC for Image Rendition?

The internet is flooded with images and videos. Website creators selling products have a hard time conveying their features and dimensions. The NeROIC model aims to solve the problems that image creators and website owners face in describing and rendering similar items across different environments and backgrounds.

Overview

NeROIC builds on NeRF for novel view synthesis, enabling a superior approach to object capture and rendering.

The model is implemented in the PyTorch framework and trained on four NVIDIA V100 GPUs with a batch size of 4096; testing runs on a single V100.

By building NeROIC, AI researchers have demonstrated the role of neural networks and novel view synthesis in image capture, composition, and relighting approaches for multi-level image rendering. This opens new avenues for AI/ML tools built specifically for online image collections that are cropped or captured in different environments and against different backgrounds.

To access the GitHub code for the NeROIC project, click here.

Project authors:

Zhengfei Kuang, University of Southern California, USA
Kyle Olszewski, Snap Inc., USA
Menglei Chai, Snap Inc., USA
Zeng Huang, Snap Inc., USA
Panos Achlioptas, Snap Inc., USA
Sergey Tulyakov, Snap Inc., USA

How Companies Are Using AI to Alleviate Labor Shortages
https://aithority.com/machine-learning/decision-tree-learning/how-companies-are-using-ai-to-alleviate-labor-shortages/
Tue, 29 Nov 2022


Three of every four companies have reported talent or labor shortages and difficulty hiring, a 16-year high. Profound social, economic and demographic changes have created unmet demand for workers in industries ranging from hospitality to logistics to healthcare. Executives across sectors are struggling to attract and retain talent, and it's likely that labor shortages will remain a critical issue for many organizations moving forward.

However, the rapid advances in artificial intelligence (AI) have the potential to significantly disrupt labor markets. Leading organizations are using AI technologies to reduce the impact of labor shortages and improve their competitive position, while also saving on costs.

Here’s how they’re putting AI and big data to use:

They’re improving retention

Departing employees most often cite low pay and insufficient benefits as the primary reasons for their resignations. Others point to an unsupportive, unpleasant work environment. But every company is different, which is why leading organizations are pulling internal data and using technology to determine the exact reasons their talent is leaving, so they can make data-driven decisions to solve their talent issues.

You may be thinking: What kind of tech are they using?

Machine Learning algorithms are available, for instance, to determine when remote employees are most productive. Employers who use these tools can build schedules that tap into remote workers’ best hours while also accounting for the needs of employees’ personal lives. The result: Higher employee satisfaction.

Other employers turn to AI tools to reduce mind-numbing routine tasks. Customer-service representatives, for example, are far happier when they devote their days to meaningful relationships with customers rather than the tedious, repetitive jobs that can be handled by AI-based chatbots. Employers demonstrate respect for their workers when they automate the tiresome tasks. The effects ripple through the organization and happy employees build relationships that create loyal, profitable customers.

They improve productivity for the long term

When existing workers become more efficient, employers need to add fewer employees to handle a growing volume of business. AI-powered tools provide dynamic productivity gains across a wide variety of industries.

For example, look at the productivity gains in field service knowledge management — the sector that ensures that service workers have the training, experience and analytics tools they need to fix something right the first time, whether they’re repairing an oven in a suburban home or a big piece of industrial machinery at a remote location.

AI-based tools can provide historic data on breakdowns and their causes, determine the best potential solutions and track down the necessary parts. Some organizations even analyze their historic data to determine which field service technicians are best at certain types of assignments, increasing the likelihood that a service call will be successful the first time.

In Rock Hill, S.C., 3D Systems uses service intelligence, which is technology (powered by AI) that mines and analyzes traditional service data and institutional knowledge from a company’s highest-performing employees. Service intelligence helps 3D Systems sort through service data, diagnose a problem, and suggest solutions that can be implemented remotely. As a result, the manufacturer reduces technician travel time, cuts parts consumption and improves the speed of repairs, subsequently reducing costs.

This is not a short-term solution or band-aid, but rather a way to improve productivity across the workforce in a meaningful way. Similar stories of productivity gains are being reported across numerous industries, particularly those most pressed by labor shortages.

They upskill workers quickly

Organizations committed to the development of the skills of their workers can recruit from a wider pool of candidates — not just experienced workers, but eager newcomers as well.

AI-based tools can take practical information gathered from veteran workers, combine it with historical data, and mix in information from clients and vendors to create training materials that support new workers far more effectively than traditional onboarding methods. AI tools also provide fast, invaluable data on the progress of a new employee, including any skills that may need additional attention.

Newcomers are less likely to become frustrated — and less likely to leave for other jobs — when they feel the satisfaction of improved skills. And, the promise of training can be a powerful draw when employers are recruiting.

Good upskilling also reduces costs. A recent survey in one industry, for example, found that performers in the bottom quarter of the workforce cost organizations 84 percent more than top performers.

They retain institutional knowledge

Every day, 10,000 people in the United States reach age 65, the traditional retirement age. A lot of institutional knowledge and company expertise is walking out the door with those workers.

To transfer institutional knowledge, companies and other employers are introducing phased-retirement and part-time work opportunities that allow younger workers to continue to tap the experience of retirement-age employees.


But leading organizations are taking things further as they gear up AI-based initiatives intended to capture and “save” the knowledge that retiring workers have accumulated throughout their careers. They're then applying machine reasoning to that data to help make decisions long after the retired worker has departed.


AI-based tools are making a substantial contribution to employee retention, training and productivity — the key elements in mitigating the effects of labor shortages. There’s no question AI will play an even greater role as economic challenges grow and the need to attract and retain workers escalates. As we enter into a period of economic uncertainty, companies should take advantage of the opportunity to streamline operations and cover for employees leaving, simply by leveraging AI to augment their human intelligence.

The Uncertainty Bias in AI and How to Tackle it
https://aithority.com/machine-learning/the-uncertainty-bias-in-ai-and-how-to-tackle-it/
Sat, 08 Oct 2022


Bias in AI is a formidable topic for any data scientist. If you are reading this, you probably know that artificial intelligence systems have a bias problem. While true, that thought is misleading: AI systems themselves inherently have no bias. However, if a system is trained on biased data, or the people running it do not correct for it, it can return faulty, biased results.

But you may not know that the same AI systems, even those we would consider to be free of bias in AI, can present a different and no less concerning outcome. By favoring the most common, normative or expected data, AI can subject unusual or outlier data to uncertainty.

It's been well established that AI systems will replicate and often exacerbate the bias inherent in their training datasets. However, even when measures are taken to level the playing field, a subtle but equally undesirable result may occur because of prediction uncertainty.

Uncertainty, and especially unfairness in uncertainty, can be a complicated idea. Think about comparing two different GPS navigation apps. Both apps tell you similar expected travel times, but the first app is always within a minute or two of the actual time, while the routing of the second app results in actual travel times that can be 10 minutes faster or 10 minutes slower than expected. Which one would you use in that case? And why does this situation arise in the first place?

An AI system’s certainty and accuracy in making predictions tends to increase with the amount of training data it sees. In our data rich world, it’s usually not an issue to collect more data. However, while some groups of people are well represented in commonly used datasets, other, marginalized groups are under-represented. When AI systems are asked to make predictions for marginalized groups, the answers it provides will be less predictable, accurate, or relevant than for a situation that’s well represented in the training data.
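A small, self-contained experiment (a hedged sketch with synthetic data, not any production system) makes the effect visible: train a tree ensemble on data where one group occupies a densely sampled region of feature space and another a sparse one, and the disagreement between ensemble members, a rough proxy for predictive uncertainty, is noticeably larger for the sparse group.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic features: the majority group sits in a densely sampled region,
# the minority group in a sparsely sampled one.
X_major = rng.normal(loc=0.0, size=(2000, 5))
X_minor = rng.normal(loc=3.0, size=(30, 5))
X = np.vstack([X_major, X_minor])
y = X.sum(axis=1) + rng.normal(scale=0.3, size=len(X))  # true target

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def mean_ensemble_std(points):
    # Spread across the individual trees approximates predictive uncertainty.
    per_tree = np.stack([tree.predict(points) for tree in forest.estimators_])
    return per_tree.std(axis=0).mean()

print("majority-group uncertainty:", mean_ensemble_std(rng.normal(0.0, size=(200, 5))))
print("minority-group uncertainty:", mean_ensemble_std(rng.normal(3.0, size=(200, 5))))
```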

Arguably, an uncertain, unpredictable system is worse in some respects than one that’s predictably biased.

A biased system isn’t a good thing, but if the bias is known and quantified beforehand, adjustments are possible and people using its predictions can compensate. In contrast, the problem with uncertainty is that you don’t know what is going to happen.

Consider this example.

If you are not sure what is going to happen when you turn in a homework assignment or an essay in your class, you become less prepared to make adjustments or plan for the next assignment or essay than someone who knows with greater confidence what the outcome will be.

When aggregated together across the huge number of decisions made by and made about each person every day, even small differences in uncertainty can have enormous consequences.

To be clear, this isn’t an argument against AI in society, but rather a call to action to recognize that its enormous potential to improve the lives of all people comes with important considerations that shouldn’t be ignored.

In truth, I am more than a believer. I spend my days building and refining AI systems for my company. If you are not familiar with us, students, educators, and schools use our software to uphold academic integrity. We give students step-by-step, personalized guidance on writing techniques and source citation. We provide data to help teachers and schools identify authentic work from plagiarized, copied, recycled, or otherwise fake work. We also help teachers cut grading time and more efficiently give feedback to students.

We increasingly do these things with AI and algorithms. We use existing information to make assessments and strong, calculated guesses about the source of written materials or whether one error is sufficiently like another error to merit the same response.

This context is important, because it is a good example for discussing the unpredictability of AI.

One of the most hotly researched areas of AI is automatic feedback and grading of long-form writing such as essays and reports. This form of writing is not only commonly used, it's also enormously time consuming to grade. Unlike math problems or computer code, writing blends freedom of abstract, stylistic self-expression with the need to convey concrete ideas.

Building AI that provides feedback and scoring capabilities requires collecting [at least] thousands of human-scored essays, feeding them into a specifically designed natural language AI formulation, and allowing the model to learn associations between co-occurrences of words, phrases, syntax and punctuation, and human generated scores, which it stores as mathematical parameterizations.  Given enough training essays, the model can learn to mimic – and in some ways, exceed – human scoring performance on previously unseen writing that is statistically similar to the writing of the training set. It does so by using the stored parameterizations to perform a set of mathematical operations on the new data that renders an “answer” which we refer to as a prediction.
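A toy version of that pipeline might look like the following (a deliberately minimal sketch, not the author's system; the essays and scores are invented): simple text features stand in for the learned parameterization, and a linear model maps them to a predicted score.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: essays paired with human-assigned scores.
essays = [
    "The evidence suggests that the castle fell for three reasons...",
    "In my opinion school should start later because sleep matters...",
    "Dogs is better then cats because they is loyal...",
]
scores = [5.0, 3.5, 1.5]

# Word and phrase co-occurrence statistics serve as a crude stand-in for
# the "mathematical parameterizations" described above.
scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
scorer.fit(essays, scores)

# The model can only mimic scoring patterns present in its training set;
# writing unlike anything it has seen will be scored unpredictably.
print(scorer.predict(["Definitions belong to the definers, not the defined."]))
```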

We’ve been developing technology to do this work for almost a decade, and we are the leaders in the field. We’ve also been judicious about deploying this technology because we understand its limitations.

As an example, consider the sentence from Toni Morrison’s Beloved: “Definitions belong to the definers, not the defined.” Show this extraordinary sequence of words to an essay grading AI that’s only been trained on typical English fluent middle school writing and it’s equally likely to deem the sentence as remarkable as it is to say that the sentence is repetitive and nonsensical. The particular mathematical parameterization of this AI is unable to make sense of the power of this sentence – it’s simply never seen anything like it before.

Of course, most writers aren't Toni Morrison; however, the underlying issue persists. AI models that are not shown enough representations of the speech and writing patterns of writers from different ethnic, cultural and regional backgrounds begin to perform unpredictably when shown writing from those groups, while at the same time performing with high accuracy and low unpredictability for those in well-represented groups. The definers of the AI are the majority group, and the definitions the AI operates with are not being defined by everyone equally.

Since the AI that my team builds is designed to help students, I think of a student whose writing or composition background and style are unique – not bad, just different from the norm present in the training data. And I think about the stress that the unpredictability of AI assessment must cause. Simply not knowing how a system based on predictable norms will handle the non-norm must be a terrible way to engage the process of teaching and learning – or anything else for that matter.

And although it is not a technology issue per se, I also wonder what systems based on making good guesses within established boundaries teach people with unusual inputs, with unusual writing styles in this example.

Are we inadvertently telling them to wrap up and tuck away their creativity and individuality?

Are we teaching them to write boring? To be “normal”? 

We believe that learning should help each individual become more of who they are by helping them fulfill their own potential, with their own style, voice, and direction. How can we build AI that helps accomplish this?

The good news is that there are two ways we can minimize those risks and tamp down the unpredictability penalty. Yet, as one might expect, neither is easy.

One way to get AI to do better at assessing outlying information is to be diligent about human review. When AI says the next William Faulkner is gibberish, a human needs to be in the oversight pathway to make the right determination. The AI needs to be constantly told what is what – this is actually good, that is actually not.

This approach is also useful for mitigating many of the harmful effects of bias in AI – people can spot it and override or counteract the result, reducing not only the adverse outcome but the possibility of reinforcing it for use in future, similar cases. This requires close cooperation of AI teams and product teams to build AI enabled experiences and products that give context to potential bias, highlight areas of low confidence in AI prediction and specifically bring in human experts to oversee and, if necessary, correct the AI predictions.

The second way of addressing the issue of unequal AI uncertainty is improving the representation of marginalized groups in training data sets. On the surface, this sounds like the old adage "add more data," but in reality, I mean that we need to add specific data that captures the enormous and wonderful tapestry of learners. Additionally, we need to make sure that the data's labels (grades, tags, etc.) are carefully vetted by those who have cultural and lived experiences relevant to the source of the data. This allows us to train AI that encodes context and is aware in ways most AI isn't today.

Over the past few years, the power and peril of embedding AI into every aspect of our lives has become a mainstream topic – and I’m glad to see our society begin to grapple with these important questions. The way AI can actively propagate societal biases is now well understood, and efforts are already underway to mitigate their harmful impacts. We need to add unequal uncertainty to the conversation around AI fairness. Creating AI that works “better” for some groups and “worse” for others – even if on average the AI is fair – is still unfair and does not live up to our ideals.

Integrations and Collaboration Are the Catalysts of Today's Robotics Revolution
https://aithority.com/machine-learning/decision-tree-learning/integrations-and-collaboration-are-the-catalysts-of-todays-robotics-revolution/
Tue, 06 Sep 2022


It may seem like a long time ago, but during the early stages of the Robotic Operating System (ROS) buzz, many companies were hesitant to adopt it into their development. 

Generic protocols, software packages, and visualization tools were things each company would develop internally, again and again. This created a years-long process in which developers had to reverse-engineer technologies or simply reinvent them, even if they already existed within competitor products.

Before the early days of what we now know as the automation revolution, Linux was considered good only for academia and hackers. Even Microsoft was competing to get a foot into the robotics market with Microsoft Robotics Studio.

Back then, making a driver work often meant compiling your own Linux Kernel, reading through some obscure forums by the light of a candle, or as my lab professor would say “Here be dragons.” Developers back then shared a common living nightmare, where by the time real image data began streaming through C++ code, high powered graphic display drivers stopped working due to incompatible dependencies and Ubuntu would crash on boot. 

By now, more than a decade has passed, and automation is the name of the game for developers who want to meet market needs at lightning speed with greater reliability. ROS has come into the picture, making data visualization, simultaneous localization and mapping (SLAM) algorithms, and robot navigation things that anyone with some free time and a step-by-step tutorial can develop, test, and customize. Robotic sensor and platform vendors themselves are now singing ROS' praises and releasing Git repositories with ready-made ROS nodes, the very ones they used to test and develop the hardware.

A shift in the robotics revolution with new-age software development

This gives the impression that the basic growing pains of robotic software development are now long gone. Today, with off-the-shelf components and software component libraries, building your own robot has never been easier. All this is before we even talk about simulation tools and the cloud. 

Yet, for some reason, most of the robots created today are still closed boxes. They are not ROS-based, there's no cloud connectivity for OTA updates, and the OS cannot be updated. iRobot, for example, discussed its intention in 2019 to move away from a proprietary operating system to a ROS-based one, and currently uses ROS only for testing through its Create 3 platform. This is just one example. If you look at the robots around you, most of the issues they face will never be solved, and their behavior will not change greatly. Today's robots will be replaced by entirely new robots running a new OS and a full-blown new ROS release, rendering most of the robots around us today obsolete.

Remember when this was common for phones? Before iOS, Android, or any other 3rd generation technology? 

Breaking the cycle in the robotics revolution

Something to consider is that robotics developers and enthusiasts are a critical bunch. They analyze, review, and criticize openly when companies use closed software platforms or release robots with already outdated technology. 

Perhaps this is due to long development processes, where cloud-connectivity, FOTA, and other modern features were not around when they set out to build something new. Whole setups are hard to break down and reassemble. Program flows are not transferable. 

As early as 2012, companies were developing and releasing basic behavior-tree decision-making code for ROS. Yet it took until ROS 2 for a behavior engine to first be used as a standard ROS component. This hits especially hard for developers who have tried to reconfigure move_base between robots, set up TFs (transform frames), and retune thresholds for negative and positive obstacles when the sensor type or position changed, let alone update a robot's simulation model or make sure its dependencies are met across the various ROS versions provided by the vendor.
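For readers who have not met behavior trees, here is a minimal, framework-free sketch of the idea in Python (illustrative only; real ROS systems would typically use a library such as py_trees or BehaviorTree.CPP, and the robot actions here are placeholders):

```python
from enum import Enum

class Status(Enum):
    SUCCESS, FAILURE, RUNNING = range(3)

class Sequence:
    """Ticks children in order; fails fast, succeeds only if all succeed."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Action:
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, fn):
        self.fn = fn

    def tick(self):
        return self.fn()

# Placeholder robot behaviors for illustration.
battery_ok = Action(lambda: Status.SUCCESS)  # e.g., check a battery topic
navigate = Action(lambda: Status.RUNNING)    # e.g., a navigation goal in progress

root = Sequence(battery_ok, navigate)
print(root.tick())  # -> Status.RUNNING: navigation is still executing
```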

This industry-wide pain has brought robotics engineers together to identify common development hurdles and compile low-code or no-code software packages that give companies a head start on their development. Relying on various operating systems and development tools, robotics startups are finding themselves on the same footing as the big industry players, able to rapidly build their robotics foundation and focus on their proprietary technology. These software components can be organized, connected, and reassembled through code, a console interface, or a web GUI, so that anyone, even without ROS-specific know-how, can understand and see the various building blocks that compose the robot's execution. Deconstructing the mission into containerized blocks also unties the problematic coupling of OS and ROS versions by providing isolation, enabling the use of multiple ROS distributions on the same robot, including ROS 1 and ROS 2 components together.

Components can now be replaced easily, making it simpler to test alternative algorithms, and robot access can be shared between operators and developers for remote work at any time. This does not require installing anything on the robot itself, as all installations are managed by an agent running as a service on the robot. Multiple users can see live data or access and change the robot's configuration.

If we want to make the robotics world a better place for generations to come, while also reaping the benefits of streamlined processes, it is vital that we look back at what robotics development used to be. As in the software development and SaaS worlds, collaboration in the robotics space is poised to usher in a new era of brilliant technology and previously unimaginable possibilities.

How Cognitive Psychology Principles Can be Applied to Knowledge Graphs
https://aithority.com/machine-learning/decision-tree-learning/how-cognitive-psychology-principles-can-be-applied-to-knowledge-graphs/
Tue, 16 Aug 2022


There are many cognitive psychology principles involved in explaining human behavior. Today some of these principles can be applied to knowledge graphs to utilize advanced reasoning techniques, improve their accuracy with machine learning outputs, and dramatically increase their ability to fulfill mission critical objectives.

An influential principle for human problem solving originates from Allen Newell's Unified Theories of Cognition and Human Problem Solving. The principle is that human problem solving can always be described as search in problem spaces. A problem space formally consists of states and operators, where a state is any configuration of all the objects and inter-object relationships relevant to a problem. When solving a problem, you begin in the 'begin' state and try to get to the 'end' state, or solution state. You move from the begin state to the end state by repeatedly applying operators to intermediate states.

The game of chess is a good example. The begin state is the board you just set up; the end state is that you have checkmated your opponent. To get to the end state you have to apply operators, which in this case are the valid moves of the chess game.
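As a minimal sketch of Newell's formulation (illustrative; a toy numeric puzzle stands in for chess), a problem space can be expressed directly in Python as a begin state, a goal state, and operators, with breadth-first search standing in for the problem solver:

```python
from collections import deque

# Toy problem space: states are integers, operators are the legal "moves".
operators = [lambda s: s + 3, lambda s: s * 2, lambda s: s - 1]

def solve(begin, goal):
    """Breadth-first search from the begin state to the end state."""
    frontier = deque([(begin, [begin])])
    seen = {begin}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for op in operators:
            nxt = op(state)
            if nxt not in seen and 0 <= nxt <= 100:  # keep the space finite
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(solve(begin=2, goal=11))  # -> [2, 5, 8, 11] via three applications of +3
```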

Problem solving is about choosing the most appropriate operator at any point in time.

For a beginner, just learning what operators are available in chess is a difficult task in itself. But once you know the rules, the harder task is to make the 'right' move. Over time, people who play chess learn an enormous number of rules or patterns that help with making the right or best move given the current configuration of the board.

Two Ways Humans Learn

There are two major ways of learning which operator to apply. The first is to try each move in your mind and evaluate the soundness of the new configuration of the problem state; you might do this recursively several levels deep before finally making a move. The other is to simply make a move in the real world and then learn the consequences of that move in that particular situation. Newell's framework supports both ways of learning.

The principles of human problem solving and learning described by Newell are all couched in the language of graphs and symbolic rules and patterns, and the more recent versions of Newell’s theories also include machine learning from external operations as a learning and feedback mechanism. Modern intelligent knowledge graphs can borrow many principles from Newell’s work to create learning and self modifying systems.

Applying Cognitive Psychology Principles to a Healthcare Knowledge Graph

Let’s consider a knowledge graph that is a digital twin of a hospital. This knowledge graph represents all relevant entities in a hospital, that is, it knows all patients, nurses, doctors, beds and expensive equipment in a hospital environment. Importantly, it even knows the location of every entity at any point in time. The location is known at a resolution of a few centimeters through RFID and other localization techniques.

In this example, the begin state is the beginning of a shift where we have the current configuration of all the entities. The end state is the end of the shift where every patient is visited a sufficient number of times and there is a  minimal amount of suffering for both patients and hospital staff. The operators in this example are every move or visit that a nurse and doctor can make from one entity to another.

There are several factors that make scheduling all these 'operators' into a fixed plan non-trivial. To begin with, it is sometimes hard to predict how long a doctor will need to spend with a patient, patients might have unexpected emergencies, and, to make matters even more complex, patients can make 'illegal' moves, such as visiting the bathroom when they are not fit to do so. The goal of this knowledge graph is to recommend a schedule at the beginning of a shift, in the full realization that unexpected events can completely change the schedule at any point in time.

When creating the initial plan we look at the state of each patient, but also at the availability of nurses and doctors and the goal of minimizing suffering. In the planning process we use symbolic rules and constraints (e.g., patient P cannot get out of bed because of two broken legs; nurse N cannot lift patient P out of bed because she is not strong enough; doctor D is not specialized in the disease of patient P, and so on).
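A hedged sketch of how such symbolic constraints might be encoded (the entities, attributes, and rules below are invented for illustration; a real system would evaluate them over the knowledge graph itself): constraints become predicates that filter candidate staff-patient assignments before any statistical learning is involved.

```python
# Hypothetical entities; in a real knowledge graph these would be graph nodes.
patients = [{"id": "P1", "needs_lift": True, "disease": "cardiology"}]
staff = [
    {"id": "N1", "role": "nurse", "can_lift": False},
    {"id": "D1", "role": "doctor", "specialty": "cardiology"},
]

# Symbolic constraints expressed as predicates over (staff, patient) pairs.
constraints = [
    # A nurse who cannot lift may not serve a patient who needs lifting.
    lambda s, p: not (s["role"] == "nurse" and p["needs_lift"] and not s["can_lift"]),
    # A doctor must be specialized in the patient's disease.
    lambda s, p: not (s["role"] == "doctor" and s["specialty"] != p["disease"]),
]

def allowed(s, p):
    return all(rule(s, p) for rule in constraints)

candidates = [(s["id"], p["id"]) for s in staff for p in patients if allowed(s, p)]
print(candidates)  # N1 is filtered out (cannot lift); D1 remains a candidate
```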

But as we mentioned above, during the execution of the plan unexpected things happen: patients get worse, doctors spend more time than expected in surgery, and so on. One can learn from these unexpected events both by learning explicit symbolic rules and by using statistical and machine learning techniques. For example: every time we get into this particular situation and respond with this action, the outcome is negative. With enough samples, we can start learning such rules with machine learning.

The Result: A Self-Modifying System

Using cognitive psychology principles in knowledge graphs can create a virtuous cycle between symbolic reasoning and machine learning, producing a self-modifying system. Self-modifying systems can be applied in many domains, but they are especially useful when developing digital systems for power generation plants, manufacturing operations, healthcare services, the automotive industry, and urban planning.

In all these domains, symbolic rules will get you very far, but ultimately there are always unexpected events that will force you to fine-tune with statistics and machine learning techniques.

DeepNash and the World of Model-free Multi-agent Reinforcement Learning (RL)
https://aithority.com/machine-learning/deepnash-and-the-world-of-model-free-multi-agent-reinforcement-learning-rl/
Mon, 11 Jul 2022


The DeepMind team has managed to train an agent that can play Stratego, one of the world's most complex board games, and beat other AI bots with up to a 97% win rate. The agent is called DeepNash, a powerful reinforcement-learning-based multi-agent system built on game theory, neural networks and deep learning capabilities.

Stratego, the new-age strategy board game, has piqued the interest of AI researchers around the world. It has been regarded as one of the most complex modern-day battle-strategy board games, with a very high density of incomplete information. For decades, AI scientists have been trying to teach computers to play this complex game, but unlike with chess, reinforcement learning (RL) algorithms fell short of expectations. Now, the DeepMind AI team has built a semi-supervised multi-agent reinforcement learning algorithm to teach machines how to play and win this game of brains. The DeepMind researchers call this autonomous RL agent DeepNash. DeepNash is a model-free multi-agent system based on Regularized Nash Dynamics (R-NaD), combining deep neural networks with the ε-Nash equilibrium (epsilon, or near-Nash, equilibrium) and advanced deep learning capabilities.

Effectively, DeepNash has a very high win rate against all the other AI bots that have been trained to play Stratego. DeepMind reports that DeepNash wins 97% of its games against AI bots and 84% against human players on the Gravon platform. In 2022, DeepNash ranked among the top three players on Gravon's all-time leaderboard.

DeepNash played Stratego against these AI bots:

  • Probe
  • Master of the Flag
  • Demon of Ignorance
  • Asmodeus
  • Celsius
  • Celsius 1.1
  • PeternLewis
  • Vixen

DeepMind published the results of these matchups in the paper below.

[Paper: Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning]

Perfect Information Games, Imperfect Information Games and the World of Model-based Reinforcement Learning

To understand how information is used in games, AI researchers must master game theory: a mathematical approach to developing, analyzing and understanding how models can identify the actions and engagements of rational agents such as a human, a machine, a piece of software, or a bot. The first mention of game theory and related strategies came in 1928, when the famous polymath John von Neumann published the paper "On the Theory of Games of Strategy." In 1944, he followed up by co-authoring "Theory of Games and Economic Behavior" with Oskar Morgenstern. However, the real push for game theory came in the 1950s, when John Nash proposed the Nash equilibrium for mixed strategies in n-player, non-zero-sum games. Between the 1950s and 2000s, scientists and mathematicians developed many theories and approaches to understand the strategies involved in different kinds of games (cooperative versus non-cooperative, symmetric versus asymmetric, sequential versus simultaneous, and perfect-information versus imperfect- or incomplete-information), as well as Bayesian games.
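As a toy illustration of what a mixed-strategy Nash equilibrium looks like computationally (this is not DeepMind's method; R-NaD is far more sophisticated), fictitious play on the zero-sum game of matching pennies converges to the equilibrium in which both players mix 50/50:

```python
import numpy as np

# Row player's payoff matrix for matching pennies (zero-sum).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

row_counts = np.ones(2)  # empirical action counts for each player
col_counts = np.ones(2)

for _ in range(100_000):
    # Each player best-responds to the opponent's empirical mixture.
    col_mix = col_counts / col_counts.sum()
    row_counts[np.argmax(A @ col_mix)] += 1
    row_mix = row_counts / row_counts.sum()
    col_counts[np.argmin(row_mix @ A)] += 1  # column player minimizes row payoff

print(row_counts / row_counts.sum())  # -> approximately [0.5, 0.5]
print(col_counts / col_counts.sum())  # -> approximately [0.5, 0.5]
```

In zero-sum games the empirical frequencies of fictitious play converge to a Nash equilibrium; Stratego's difficulty is that its enormous, imperfect-information game tree makes naive approaches like this intractable.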

Tic-tac-toe, Go, chess and checkers are examples of perfect-information sequential games. Stratego, on the other hand, like poker and bridge (and most card games), is an incomplete- or imperfect-information sequential game. Variations on how information is distributed give rise to further game models, such as Bayesian, combinatorial, evolutionary, and infinitely long games.

In an imperfect-information sequential game like Stratego, the decision tree contains on the order of 10^535 decision nodes, or possible states.

Unlike in chess or Go, where it is possible to train an agent toward a Nash equilibrium with model-based RL, the same is impossible for Stratego, for two reasons. First, Stratego is an imperfect-information game. Second, search in Stratego is intractable, as a Nash equilibrium can't be used to estimate private information from public states. This limitation is addressed by adopting R-NaD, an advanced RL approach that trains a model-free agent toward a Nash equilibrium. The approach can train multiple agents, and hence DeepNash is a multi-agent, model-free RL algorithm.

How Does DeepNash Work?

DeepNash leverages the idea of regularization in the R-NaD algorithm, achieved through a deep neural network. R-NaD, the core model-free RL training algorithm, is implemented with a deep neural network and then fine-tuned to remove low-probability mistakes.

DeepNash is able to hide information from opponents effectively by adjusting trade-offs in its favor. The agent can also deceive and bluff opponents when required, a highly advanced behavior that DeepNash has achieved through model-free RL training with R-NaD and a deep residual neural network.

Click here to learn more about DeepNash and its performance against other bots and human agents.

Abacus.ai Publishes Paper on 'Explainable Machine Learning' for NeurIPS 2021
https://aithority.com/machine-learning/neural-networks/deep-learning/abacus-ai-publishes-paper-on-explainable-machine-learning-for-neurips-2021/
Thu, 28 Oct 2021


Explainable Machine Learning is a sub-field of data science and artificial intelligence (AI). Also referred to as X-ML or XML, it is projected to be the next big avenue for AI and machine learning applications. Abacus.ai, a leading AI startup, has made substantial progress in the field of explainable machine learning, published in its latest paper. The paper is set to appear at the Neural Information Processing Systems (NeurIPS) Conference 2021, to be held 7-10 December this year.

Explainable machine learning is tested on three key parameters: transparency, interpretability, and explainability. For a plain machine learning model to qualify as an XML algorithm, it should be understandable using concepts of human-level intelligence. In recent years, significant developments have been made in this area with the aim of bringing AI and deep learning models out of the conventional "black box" domain. As IBM notes, machine learning models are often thought of as black boxes that are hard to interpret.

In its latest paper on XML, Abacus.ai has released the workflow associated with XAI-BENCH, a battery of synthetic datasets for "benchmarking popular feature attribution algorithms." The synthetic datasets can be configured and re-engineered to simulate real-world data, and used to evaluate popular explainability techniques across several evaluation metrics.
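The general idea of benchmarking attribution methods on synthetic data can be sketched as follows (a hedged illustration, not the XAI-BENCH code): generate a dataset where the informative features are known by construction, then check whether an attribution method, here simple permutation importance, ranks those features highest.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# With shuffle=False, make_classification places the informative features
# in columns 0..2, giving us ground-truth attributions to benchmark against.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

ranking = np.argsort(result.importances_mean)[::-1]
print("top-3 features by attribution:", sorted(ranking[:3]))  # ideally [0, 1, 2]
```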

AI is becoming more advanced, and the people behind this trend attribute the evolution to powerful XML techniques entrusted with bringing computing out of black-box approaches. The black-box legacy within conventional AI/ML algorithms is so deeply entrenched that dislodging it will require much more than publishing papers on XML. Abacus.ai is putting its brain and brawn behind XML models to help scientists and AI engineers understand the ways they can create algorithms that humans can understand, and evaluate what's happening inside the 'black box' of the AI/ML field.

Role of Explainable Machine Learning in Modern Data Science

Explainable Machine Learning or XML is already influencing the penetration of advanced AI in various industries. Some of the key applications of XML in the modern era have been listed below:

In healthcare and telemedicine: XML is used to optimize image analysis, diagnostics, and decision-making for patient management processes;

In banking and loan approval systems, where XML is used to evaluate credit health and financial fraud risks;

In blockchain and crypto, where XAI  and machine learning algorithms can be used to fully secure and decentralize the “highly sensitive system for storing and processing AI-generated data”, and so much more…

As we continue to trace the next phase of advanced AI growth in the marketplace, it is expected that companies like Abacus.ai would emerge as the top contributors of trustworthy AI abilities that break the conventional mold of black-box modeling.

OctoML Accelerates ML Innovation Across Broad Array of Arm Hardware and Embedded Environments
https://aithority.com/machine-learning/octoml-accelerates-ml-innovation-across-broad-array-of-arm-hardware-and-embedded-environments/
Wed, 20 Oct 2021

Apache TVM furthers unified ML software stack from bare-metal microcontrollers all the way up to cloud silicon


OctoML has announced a collaboration with Arm to deploy next-generation machine learning (ML) applications and models across its suite of hardware. The partnership enables Arm partners to upload machine learning models to the OctoML platform and receive optimized and packaged versions of the model fine-tuned to Arm® hardware.

Arm is also contributing engineering effort to Apache TVM which provides companies developing ML on Arm technology one unified software stack to deploy models seamlessly across microcontrollers, GPUs, NPUs and CPUs.

What’s OctoML?

OctoML is a machine learning acceleration platform based in Seattle, Washington. OctoML aims to accelerate model performance while enabling seamless deployment of models across any hardware platform, cloud provider, or edge device. The company’s investors include Addition, Madrona Venture Group, and Amplify Partners. OctoML was founded by creators of open-source Apache TVM, CEO Luis Ceze, CTO Tianqi Chen, CPO Jason Knight, Chief Architect Jared Roesch, and VP of Technology Partnerships Thierry Moreau.

Apache TVM is a software framework that provides a unified layer between the leading machine learning frameworks—like PyTorch, TensorFlow—and the vast array of hardware solutions available. This innovation means that ML models can be deployed anywhere from cloud to edge to mobile.

Arm’s open source efforts in providing Apache TVM support to Arm Cortex®-A processors, Cortex-M processors, Arm Mali™-GPUs and Arm Ethos™-NPUs, ensures that data scientists, software and embedded developers can now use a single software stack that works on the billions of chips based on Arm technology.
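As a hedged sketch of that unified flow (the model file name, input name, and target string are assumptions for illustration; the calls follow TVM's documented Relay workflow), compiling an ONNX model for a 64-bit Arm CPU might look like this:

```python
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("classifier.onnx")  # hypothetical model file
shape_dict = {"input": (1, 3, 224, 224)}   # assumed input name and shape

# Import the model into TVM's Relay intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Cross-compile for a 64-bit Arm Cortex-A target (e.g., a Raspberry Pi 4).
target = tvm.target.Target("llvm -mtriple=aarch64-linux-gnu")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Export a shared library using an Arm cross-compiler toolchain.
lib.export_library("model_arm.so", cc="aarch64-linux-gnu-gcc")
```

The same Relay module could instead be built for a microcontroller or GPU target, which is the "single software stack" point made above.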

“Optimizing and deploying machine learning workloads across a diverse array of hardware is difficult, especially when working with embedded environments,” said Luis Ceze, CEO and co-founder, OctoML.

Luis added, “Arm has been an incredible partner in the open source Apache TVM community, especially in making ‘TinyML’ a reality on devices that lack a fully-fledged operating system. It’s great to see our collaboration with Arm now extend to our SaaS platform where their customers can both speed up deploying models and also enable new ML-based use cases that were not previously viable.”

Built on the Apache TVM open source framework, OctoML’s Platform provides an automation framework that optimizes trained models to achieve optimal performance across a breadth of hardware endpoints and cloud services—without compromising accuracy. The platform readily addresses the challenge of optimizing ML models to match the resources at the edge, which opens up opportunities for a new wave of intelligent apps.

“Arm has been committed to Apache TVM since its earliest days and we believe it is a key enabling technology for the ML ecosystem,” said Steve Roddy, Vice President of Product Marketing, Machine Learning at Arm. “OctoML and the TVM community have excelled at pushing the boundaries of where ML can run, and our continued collaboration with partners like OctoML will empower the industry to develop innovative new AI applications.”

Building a Site Structure for Humans using SEO Benchmarks
https://aithority.com/machine-learning/evolutionary-systems/building-a-site-structure-for-humans-using-seo-benchmarks/
Fri, 15 Oct 2021


Since the beginning of the internet, digital marketers, and SEO specialists in particular, have been chasing Google's algorithm, adjusting to every update while forgetting that Google is in fact catering to humans.

Humans are the core of what we do as marketers, and as such, they should be at the center of every strategy and action. When it comes to planning your site structure, the same should be applied.

WHAT IS SITE STRUCTURE?

Site structure is the way you group, link and present the content, services and products of your site to the users. In summary, it would be how you organize your website’s content. The site structure can sometimes also be referred to as a taxonomy within the website.

Practicality and experience should be balanced in order to achieve a visually appealing site and an organized structure that is intuitive to the customer.

WHY SHOULD YOU CARE?

There are two main reasons why brands should care about site structure. These are:

Good Housekeeping: Just like an organized closet that makes it much easier to find any garment of clothing, a well-structured site will help your business keep a clean website with no duplicated content, minimal 404 pages and a seamless user experience.

Prioritization of Content: Google will better understand the prioritization of the content as it will be structured based on importance, helping with better rankings and optimized crawling.

HOW DOES SITE STRUCTURE AFFECT SEO?

Google's objective is to provide users with the most useful, accurate and accessible results for their searches, so much so that Google has rolled out a series of new ranking signals, based on its Core Web Vitals, that refer directly to user experience. This means we also need to understand the importance of putting humans and human behavior at the core of site structure planning.

But there are many other areas in which an optimized site structure will help us improve our visibility:

Site Crawlability: you will also help Google crawl your site more efficiently, which brings many benefits, such as:

  • Getting crawled more often

  • Getting most of your pages, if not all, crawled at once

Indexability: an optimized site structure will allow robots to crawl your pages, increasing the chances of having all of them indexed faster

Cannibalization: thanks to the site being properly structured you’ll be able to give Google an indication of the priority of the pages of your site, as well as point out which pages are secondary or subpages of the main topic

Duplicated content: when the site is structured, the content follows a path, which reduces the chances of creating or publishing the same content more than once

Internal linking & link authority: An optimized internal linking strategy will ensure a healthy link flow passing through all pages, from the home page to the latest created page

HOW DOES AN OPTIMISED SITE STRUCTURE HELP USERS?

Using Neuroscience principles, we can improve user experience, as this is key to understanding how the human brain works and how our site can have an impact on it. The main principles to have in mind when building the site structure, and our site in general, are:

Subconscious and the first impression

The subconscious mind is the most powerful part of the mind: it notices things faster than our consciousness does, and it is also responsible for our emotions.

The subconscious can judge a site in milliseconds, hence the importance of design and its impact: if a website feels right, people will trust it.

First impressions are critical; if the site doesn't feel trustworthy and reliable, users are unlikely to come back.

Simplicity

Humans nowadays have stopped reading content in full and tend to scan through headlines, which forces marketers to ensure content is properly structured and highlighted so users can reach their destination faster

Logic

It is extremely important that your site flows properly; arranging the elements to create a natural dialogue will enhance user experience.

Avoid questions that may pop into users' heads while they navigate your site and menu: "What's this?" "Where should I find X products?" "How did I get here?"

In summary, our brains require a certain order and structure to make sense of the content presented in front of us. The way content is presented can and should attract us to the brand, while the content itself should engage us and take us along the website’s intended path.

Our main goal is for users (and crawlers) to find the solution to their problems quickly and seamlessly. Following these principles will also help us convert those users into loyal customers who engage and bond with the site, coming back again and again.

MORE WAYS OF HELPING USERS NAVIGATE YOUR SITE

1. Welcoming users through the homepage

Your homepage is where you welcome users: the nucleus of your site and usually the page with the highest traffic and most incoming links, which makes it the perfect starting point for links to your most important pages.

Humans like order, simplicity and logic, so we should follow this pattern when creating our homepage and linking from it.

How can we create a homepage that caters to humans yet remains SEO- and crawler-friendly?

Just follow these simple steps:

  • It should take no more than 5 seconds to identify what the page is about, so be sure this is clear at the top of your homepage

  • It should ease the user down the intended path to purchase by providing the right content/product/service recommendations, not necessarily straight to a conversion page

  • Calls to action should be clear and stand out from the rest of the content. We are the ones guiding users through our content, which we can do via CTAs

  • The most important links, categories and products should be placed here

2. The navigation menu: the user’s guide to your site

The menu is key to helping users understand the structure of the site. When structuring it, we should ensure that it follows logic and raises no unanswered questions, as the neuroscience principles above have taught us. The goal is to ease the path for users to find their answers as quickly as possible, avoiding unnecessary complications.

Sephora’s Makeup navigation is a great example of this, as it clearly defines and categorizes the products based on body areas, which makes finding products easy for both makeup experts and beginners:

[Image: Sephora website navigation structure. Source: Reflect Digital]

3. Helping users find their way with breadcrumb trails

Just like the maps in shopping centers that show you where you are and guide you to where you want to go, these clickable paths are usually added to the desktop version of the site. They help users go back to related pages and understand where they are within your site structure. They also help crawlers understand where a page sits on the site, its priority, and its relationship to other pages.
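
On the crawler side, breadcrumb trails are commonly reinforced with schema.org BreadcrumbList structured data embedded in the page. Below is a minimal sketch that builds such a JSON-LD block in Python; the trail, page names and URLs are hypothetical.

    import json

    # Hypothetical trail: Home > Makeup > Lipstick (names and URLs are assumptions).
    trail = [
        ("Home", "https://www.example.com/"),
        ("Makeup", "https://www.example.com/makeup/"),
        ("Lipstick", "https://www.example.com/makeup/lipstick/"),
    ]

    # schema.org BreadcrumbList: one ListItem per step, positions starting at 1.
    breadcrumbs = {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            {"@type": "ListItem", "position": i, "name": name, "item": url}
            for i, (name, url) in enumerate(trail, start=1)
        ],
    }

    # The output would be embedded in a <script type="application/ld+json"> tag.
    print(json.dumps(breadcrumbs, indent=2))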

In summary, order and structure are key when you start planning your human-friendly site structure, and keeping humans at the core of your strategy will not only increase current visits and conversions but could also have a positive effect on customers’ lifetime value.


The post Building a Site Structure for Humans using SEO Benchmarks appeared first on AiThority.

Introducing Neo4j for Graph Data Science, the First Enterprise Graph Framework for Data Scientists
https://aithority.com/machine-learning/decision-tree-learning/introducing-neo4j-for-graph-data-science-the-first-enterprise-graph-framework-for-data-scientists/ | Thu, 09 Apr 2020 11:03:42 +0000



Organizations Can Address Previously Intractable Questions Using the Network Structures in Data for Better Analytics and Machine Learning

Neo4j, the leader in graph technology, announced the availability of Neo4j for Graph Data Science, the first data science environment built to harness the predictive power of relationships for enterprise deployments.

The unpredictability of the current economic climate underscores the need for organizations to get more value out of existing datasets, continually improve predictive accuracy and meet rapidly changing business requirements. Neo4j for Graph Data Science helps data scientists leverage highly predictive, yet largely underutilized, relationships and network structures to answer otherwise intractable problems. Examples include disambiguating users across multiple platforms and contact points, identifying early interventions for complicated patient journeys, and predicting fraud through sequences of seemingly innocuous behavior.

[Image: Graph data science helps solve problems from fraud to personalization and drug repurposing in various industries. Visualized in Neo4j Bloom.]
[Image: Neo4j Bloom provides a visual exploration of a financial transaction graph and the results of graph algorithms used for feature engineering to inform machine learning models.]

Neo4j for Graph Data Science combines a native graph analytics workspace and graph database with scalable graph algorithms and graph visualization for a reliable, easy-to-use experience. This framework enables data scientists to confidently operationalize better analytics and machine learning models that infer behavior based on connected data and network structures.


Alicia Frame, Lead Product Manager and Data Scientist at Neo4j, explained why Neo4j for Graph Data Science is the most expeditious way to generate better predictions.

“A common misconception in data science is that more data increases accuracy and reduces false positives,” explained Frame. “In reality, many data science models overlook the most predictive elements within data – the connections and structures that lie within. Neo4j for Graph Data Science was conceived for this purpose – to improve the predictive accuracy of machine learning, or answer previously unanswerable analytics questions, using the relationships inherent within existing data.”

Take fraud analysis, such as detecting identity fraud and fraud rings, as an example that spans areas from financial services and insurance to the government sector and tax evasion. Even the smallest predictive improvement translates into millions of dollars of savings. Neo4j for Graph Data Science makes it easier to make those incremental improvements without altering existing machine learning pipelines. Below are some simple steps illustrating how Neo4j for Graph Data Science fits into a fraud prediction workflow, followed by a brief code sketch of the first two steps:

  1. A data scientist can reveal suspicious groups of transactions using community detection algorithms, like Connected Components, to analyze behavior.
  2. They can then dive deeper by applying graph algorithms such as Betweenness Centrality or PageRank to uncover hidden structures such as accounts with unusual influence over the flow of money or information.
  3. An analyst could explore these clusters in an intuitive way and collaborate with fraud experts using Neo4j Bloom to infer which elements (i.e., features) are most likely predictive of criminal behavior.
  4. They can perform “what if” analyses or even chain “recipes” of graph algorithms together with a mutable in-memory workspace where their graphs are reshaped on-the-fly.
  5. Once the algorithmic recipes have been validated and understood, they can be used for machine learning models that are operationalized to proactively prevent – and not merely detect – fraud.
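
As a rough sketch of what the first two steps might look like in practice, the snippet below runs Weakly Connected Components and PageRank through the official Neo4j Python driver. The connection details, node and relationship labels, node properties and graph name are assumptions, and exact GDS procedure names vary between library versions (for example, older releases used gds.graph.create rather than gds.graph.project).

    from neo4j import GraphDatabase

    # Connection details are placeholders; adjust for your own deployment.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    with driver.session() as session:
        # Project an in-memory graph of accounts and money transfers
        # (labels and graph name are illustrative assumptions).
        session.run("CALL gds.graph.project('txns', 'Account', 'SENT_MONEY_TO')")

        # Step 1: reveal groups of connected accounts with
        # Weakly Connected Components (a community detection algorithm).
        # Assumes each Account node has an `id` property.
        for record in session.run(
            "CALL gds.wcc.stream('txns') "
            "YIELD nodeId, componentId "
            "RETURN gds.util.asNode(nodeId).id AS account, componentId"
        ):
            print(record["account"], record["componentId"])

        # Step 2: surface unusually influential accounts with PageRank.
        for record in session.run(
            "CALL gds.pageRank.stream('txns') "
            "YIELD nodeId, score "
            "RETURN gds.util.asNode(nodeId).id AS account, score "
            "ORDER BY score DESC LIMIT 10"
        ):
            print(record["account"], record["score"])

    driver.close()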


Neo4j for Graph Data Science enables data scientists to answer questions that are only addressable through understanding relationships and data structures. Graph algorithms are a subset of data science tools that capitalize on network structure to infer meaning and make predictions such as:

  • Cluster and neighbor identification through community detection and similarity algorithms (one such similarity call is sketched after this list)
  • Influencer identification through centrality algorithms
  • Topological pattern matching through pathfinding and link prediction algorithms
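
For instance, the similarity category above is exposed through procedures such as Node Similarity. A hedged sketch follows, reusing the same hypothetical 'txns' projection as the earlier snippet; the graph name, node property and 0.8 cutoff are arbitrary assumptions.

    from neo4j import GraphDatabase

    # Placeholders; assumes the 'txns' projection from the earlier sketch exists.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    with driver.session() as session:
        # Neighbor identification: stream pairs of similar accounts.
        for record in session.run(
            "CALL gds.nodeSimilarity.stream('txns') "
            "YIELD node1, node2, similarity "
            "WHERE similarity > 0.8 "
            "RETURN gds.util.asNode(node1).id AS a, "
            "       gds.util.asNode(node2).id AS b, similarity"
        ):
            print(record["a"], record["b"], record["similarity"])

    driver.close()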

With Neo4j for Graph Data Science, teams confidently deploy a proven solution at massive scale, running optimized graph algorithms over tens of billions of nodes. Production features such as deterministic seeding provide starter values and consistent results for reproducible machine learning workflows. By intelligently integrating network analytics with a database, Neo4j automates data transformations so users get maximum compute performance for analytics and native graph storage for persistence.

Ben Squire, Senior Data Scientist at Meredith Corporation, shared his experience with Neo4j for Graph Data Science. Meredith is a leading media and marketing services company whose broadcast television, print, digital, mobile, voice and video publications reach 190 million unduplicated American consumers every month, including nearly 95 percent of U.S. women.


“Providing relevant content to online users, even those who don’t authenticate, is essential to our business,” said Squire. “We use the graph algorithms in Neo4j to transform billions of page views into millions of pseudonymous identifiers with rich browsing profiles. Instead of ‘advertising in the dark’, we now better understand our customers which translates into significant revenue gains and better-served consumers.”

Dr. Alexander Jarasch, the Head of Data and Knowledge Management at the German Center for Diabetes Research (DZD) and collaborator on COVIDgraph.org, explained how Neo4j for Graph Data Science offers an intuitive data science experience with logical parameters and Neo4j Bloom for comprehensive graph exploration.

“Nothing is more pressing today than understanding COVID-19,” said Jarasch. “Graphs give us the ability to bring together the salient information around this confounding disease and provide a synthesized view across heterogeneous data. Today’s understanding of this coronavirus is severely hampered by minimal peer-reviewed research and the absence of long-term clinical trials. Neo4j for Graph Data Science will help us to identify where we need to direct biomedical research, resources, and efforts.”

The post Introducing Neo4j for Graph Data Science, the First Enterprise Graph Framework for Data Scientists appeared first on AiThority.
