Facebook is using AI to simulate users’ bad behavior.

Facebook’s engineers have developed a new way to identify and prevent harmful behavior, such as spamming, defrauding other users, or buying and selling weapons and drugs, The Verge reported. They can now use AI-driven bots to simulate the behavior of bad actors and let them roam freely on a parallel version of Facebook. Researchers can then study the bots’ behavior in simulation and experiment with new ways to stop them.


The simulator, known as WW (pronounced “Dub Dub” because it is a cut-down version of WWW), is built on Facebook’s real code base. The company published a paper on WW earlier this year but shared more information about the work at a recent roundtable.

The study was led by Facebook engineer Mark Harman and researchers from the company’s London-based artificial intelligence division. Harman told reporters that WW is a highly flexible tool that can be used to limit a wide range of harmful behavior on the site, citing as an example the use of simulation to develop new defenses against scammers.

In real life, scammers often start hunting for potential targets by trawling through a user’s circle of friends. To simulate this behavior in WW, Facebook engineers created a group of “innocent” bots to act as targets and trained “bad” bots to explore the network trying to find them. The engineers then experimented with different ways to stop the bad bots, introducing restrictions such as limiting the number of private messages and posts a bot can send per minute, to see how this affected their behavior.
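The experiment described above can be sketched roughly as follows. This is a minimal toy model, not Facebook’s actual code: the friend graph, the set of targets, and the rate limit are all invented for illustration.

```python
import random

def run_simulation(message_limit, steps=50, seed=0):
    """Toy model: a 'bad' bot walks a small friend graph looking for
    'innocent' target bots, constrained by a per-step message limit."""
    rng = random.Random(seed)
    # A tiny synthetic friend graph: node -> list of friends.
    friends = {i: [(i + 1) % 20, (i + 5) % 20, (i + 9) % 20] for i in range(20)}
    targets = set(range(10, 20))  # the "innocent" bots the scammer seeks
    current, contacted = 0, set()
    for _ in range(steps):
        sent = 0
        for friend in friends[current]:
            if sent >= message_limit:
                break  # the rate limit cuts the scammer off
            if friend in targets:
                contacted.add(friend)  # scammer reached a target
            sent += 1
        current = rng.choice(friends[current])  # hop to a random friend
    return len(contacted)

# Compare how many targets the bad bot reaches with and without a tight limit.
unrestricted = run_simulation(message_limit=3)
restricted = run_simulation(message_limit=1)
print(unrestricted, restricted)
```

Because both runs use the same random seed, the bot follows the same path through the graph, so the comparison isolates the effect of the rate limit alone.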

Harman compared the work to city planners trying to reduce speeding on busy roads. In that case, engineers simulate traffic flow and then experiment with interventions like adding speed bumps to certain streets to see what effect they have. WW lets Facebook do the same thing, but for the behavior of Facebook users.

“We apply ‘speed bumps’ to the actions and observations our bots can perform, and quickly explore the possible changes we could make to the product to inhibit harmful behavior without harming normal behavior,” Harman said. “We can scale this up to tens or hundreds of thousands of bots and therefore search, in parallel, many different possible… constraint vectors.”
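A “constraint vector” here bundles several limits, say a messages-per-minute cap and a posts-per-minute cap, into one candidate setting, and many candidates are scored in parallel. Below is a hedged sketch of such a search; the scoring functions are invented stand-ins, where real scores would come from running bot simulations under each candidate.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Invented stand-in metrics: "harm" falls as limits tighten, while
# "friction" (inconvenience to normal users) rises as limits tighten.
def harm(msg_limit, post_limit):
    return 2 * msg_limit + post_limit

def friction(msg_limit, post_limit):
    return 0.1 * (10 - msg_limit) ** 2 + 0.1 * (10 - post_limit) ** 2

def score(vector):
    msg_limit, post_limit = vector
    return harm(msg_limit, post_limit) + friction(msg_limit, post_limit)

# Every combination of the two rate limits is one constraint vector.
candidates = list(product(range(1, 11), range(1, 11)))

# Score all candidates in parallel, then keep the best trade-off.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(score, candidates))
best = candidates[scores.index(min(scores))]
print(best)  # the limit pair with the best harm/friction trade-off
```

The same pattern scales naturally: with thousands of bots per simulation, each candidate vector becomes one expensive evaluation, which is why Facebook runs them in parallel.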

Running simulations is a common practice in machine learning, but the WW project is notable because it is built on a real version of Facebook. Facebook calls its approach “web-based simulation.” Unlike traditional simulations, where everything is simulated, in a web-based simulation the actions and observations actually take place through Facebook’s real infrastructure, making them far more realistic, Harman said.

However, he stressed that despite the use of this real infrastructure, the bots cannot interact with actual users in any way. “They can’t actually interact with anything other than other bots through this infrastructure,” he said.

It’s worth noting that the simulation is not a visual copy of Facebook. Don’t imagine scientists watching bot behavior the way you might observe human interactions in a Facebook group. WW doesn’t produce results through Facebook’s GUI; instead, it records all interactions as numerical data.

For now, WW is still in the research phase, and the company’s simulated bot experiments have not led to any real-life changes to Facebook. Harman says his team is still testing whether the simulations match real-life behavior closely enough to justify such changes, but he thinks the work will result in changes to Facebook’s code by the end of the year.

Of course, the simulator has limitations. WW cannot, for example, simulate user intent or complex behavior. Facebook says the bots can search, make friend requests, leave comments, post, and send messages, but the actual content of these actions, such as the text of a conversation, is not simulated.

But Harman says the strength of WW is its ability to operate at scale. It allows Facebook to run thousands of simulations to examine subtle changes to the site without affecting users, and to discover new patterns of behavior. “I don’t think the statistical power that big data brings is fully understood,” he says.

One of the more exciting aspects of the work is the possibility that WW will uncover new weaknesses in Facebook’s architecture through the bots’ actions. The bots can be trained in a variety of ways: sometimes they are given explicit instructions on how to act, sometimes they are asked to imitate real-life behavior, and sometimes they are simply given a goal and left to decide their own actions. It is in the latter case (a method known as unsupervised machine learning) that unexpected behavior can occur, as the bots may find ways to achieve their goal that engineers did not predict.
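The difference between a scripted bot and a goal-directed one can be illustrated with a deliberately tiny sketch (all actions and payoffs here are invented): a bot that simply maximizes its objective can land on a high-payoff action its designers never scripted.

```python
# Invented action set: the payoff is how many potential targets each
# action exposes. "join_group" is the unanticipated shortcut here.
ACTIONS = {
    "send_message": 1,        # contacts one potential target
    "send_friend_request": 1,
    "join_group": 25,         # exposes a whole group of targets at once
}

def scripted_bot():
    # Scripted/imitation training: follows a fixed, engineer-written plan.
    return ["send_message", "send_friend_request"]

def goal_directed_bot(steps=2):
    # Goal-directed training: greedily picks the highest-payoff action,
    # even one the engineers never included in their scripted plans.
    return [max(ACTIONS, key=ACTIONS.get) for _ in range(steps)]

def reach(plan):
    return sum(ACTIONS[action] for action in plan)

print(reach(scripted_bot()), reach(goal_directed_bot()))
```

In a real system the surprise is, of course, not baked into a payoff table; the point is only that optimizing a goal, rather than following a script, is what lets unexpected strategies surface.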

“At the moment, the main focus is training the bots to imitate things we know happen on the platform. But in theory and in practice, the bots can do things we haven’t seen before,” Harman said. “That’s actually something we want, because we ultimately want to get ahead of the bad behavior rather than continually playing catch-up.”

Harman said the team has already seen some unexpected behavior from the bots, but declined to share any details, saying he did not want to give scammers any clues.