Scientists Use Machine Learning to Show That Dogs Respond More Strongly to Actions Than to Who or What Performs Them

A new study published in The Journal of Visualized Experiments reveals that dogs react to actions rather than to who or what is performing them. The researchers decoded this information from the dogs' brain data using machine learning.

Dog Experiment

To get an accurate record, the researchers collected fMRI neural data from the dogs under examination. There were only two mixed-breed dog participants, named Daisy and Bhubo. Daisy may be part Boston terrier, and Bhubo may be part boxer. During the test, both were awake and unrestrained.

The dogs watched video in three 30-minute sessions, totaling 90 minutes, while the researchers collected fMRI brain data. The team then examined patterns in the data using a machine-learning system.

The findings imply that dogs are more attuned to actions in their environment than to the person or object performing them.

According to Gregory Berns, an Emory professor of psychology and the paper's co-corresponding author, the study demonstrated that it is possible to track a dog's brain activity while it watches a video and, to a certain extent, recreate what it is seeing. He said that it was amazing that they were able to accomplish it.

Despite the length of the viewing sessions, the dogs required no rewards to keep watching. One of the scientists monitored the animals during the fMRI sessions, watching the dogs' eyes to confirm they stayed focused on the video.

Human Experiment

An identical experiment was performed on two human participants, who lay in an fMRI scanner and watched the same 30-minute video three times. Using time stamps, the brain data could then be mapped onto the video classifiers, labels marking the objects and actions on screen at each moment.
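The study's analysis code is not reproduced in this article, but the alignment step it describes can be sketched in a few lines. The Python sketch below is a hypothetical illustration, assuming each brain volume carries an acquisition time stamp and the video carries labeled segments; the function name align_volumes_to_labels, the column names, and the toy data are inventions for this example, not the authors' code.

```python
import pandas as pd

# Hypothetical inputs, not from the study:
# volumes: one row per fMRI brain volume, with its acquisition time in seconds
# labels:  one row per video segment, with start/end times and its classifier labels
volumes = pd.DataFrame({"volume_id": [0, 1, 2, 3],
                        "time_s":    [0.0, 2.0, 4.0, 6.0]})
labels = pd.DataFrame({"start_s": [0.0, 3.0],
                       "end_s":   [3.0, 7.0],
                       "object":  ["dog", "human"],
                       "action":  ["sniffing", "eating"]})

def align_volumes_to_labels(volumes, labels):
    """Attach the on-screen object/action labels to each brain volume
    by checking which video segment its time stamp falls into."""
    def lookup(t):
        seg = labels[(labels.start_s <= t) & (t < labels.end_s)]
        if seg.empty:
            return pd.Series({"object": None, "action": None})
        return seg.iloc[0][["object", "action"]]
    return volumes.join(volumes["time_s"].apply(lookup))

print(align_volumes_to_labels(volumes, labels))
```

Once every brain volume has a label, the data can be fed to a classifier like the one described next.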

The researchers applied Ivis, a machine-learning algorithm based on a neural network, to the data. A neural network is a machine-learning method that learns by examining training examples. In this instance, the network was trained to classify the content of the brain data.
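The study's exact Ivis configuration is not detailed in this article. Purely as an illustration of the general technique, training a model to predict video labels from brain data and scoring it on held-out data, here is a minimal sketch using scikit-learn's generic MLPClassifier as a stand-in for Ivis, with made-up array shapes and random placeholder data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Made-up stand-in data: 1,000 fMRI volumes, each flattened to 500 voxel features,
# each tagged with the action class shown on screen at that moment (e.g., 0=sniffing,
# 1=eating, 2=playing). Shapes and classes are illustrative, not from the study.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 500))    # brain data: volumes x voxels
y = rng.integers(0, 3, size=1000)   # action label per volume

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network learns to map brain patterns to labels.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

# Decoding accuracy on held-out volumes; on random placeholder data this sits
# near chance, whereas the study reports 99% for humans and 75-88% for dogs.
print("held-out accuracy:", clf.score(X_test, y_test))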

For the two human participants, the model built with the neural network mapped brain data to both the object- and action-based classifiers with 99% accuracy.

Decoding of the Results

According to EurekAlert!, the model failed to produce accurate results for the object classifiers when decoding video information from the dogs. When it came to classifying the dogs' actions, however, it was 75% to 88% accurate.

The findings imply significant differences in how human and canine brains function.

"We humans are very object-oriented," Berns said.

He added that the English language has 10 times as many nouns as verbs because we are particularly obsessed with naming things. Dogs, on the other hand, appear more focused on the action itself and less concerned with who or what is performing it.

He said that there are significant differences between the visual systems of dogs and humans. Dogs see only in shades of blue and yellow, but they have a somewhat higher density of motion-detecting visual receptors than humans.

It makes perfect sense that dogs' brains are oriented first and foremost to actions, Berns said. To avoid being eaten, or to keep an eye on animals they might want to hunt, animals need to be keenly aware of what is happening in their environment. Movement and action are crucial.
