There is currently a case about computers pretending to be humans and "listening" to Spotify streams in order to inflate the play counts for specific artists. This is described as fraud.

There are other cases where computers pretend to be humans. These include the lengths AI crawlers go to in order to crawl sites whose owners are trying to block them, and agentic AI clicking the 'I am not a robot' button while saying 'This step is necessary to prove I'm not a bot'. This causes the target harm, as laid out by the Wikimedia Foundation.

From my non-lawyer perspective these seem very similar. What are the considerations in assessing if such behaviour is fraud?

The linked case is in California, but answers for any jurisdiction would be interesting.


6 Answers

Here's the statutory definition of fraud by misrepresentation from Section 2 of the Fraud Act 2006:

(1) A person is in breach of this section if he — (a) dishonestly makes a false representation, and (b) intends, by making the representation — (i) to make a gain for himself or another, or (ii) to cause loss to another or to expose another to a risk of loss.

(2) A representation is false if — (a) it is untrue or misleading, and (b) the person making it knows that it is, or might be, untrue or misleading.

(3) “Representation” means any representation as to fact or law, including a representation as to the state of mind of — (a) the person making the representation, or (b) any other person.

(4) A representation may be express or implied.

(5) For the purposes of this section a representation may be regarded as made if it (or anything implying it) is submitted in any form to any system or device designed to receive, convey or respond to communications (with or without human intervention).

The prosecution would have to establish that all of the elements in the statutory definition are met.

Taking the Spotify example: when the user streams a song, it could be said that they are impliedly representing to Spotify that they are actually listening to the song, when in fact they are just artificially inflating listening numbers and silently discarding the stream. The representation is untrue or misleading, and the person knows that it is. In doing so they intend to make a gain for themselves or another (the artist).


Spotify pays artists based on the number of streams of their music.

If there were a fixed rate per stream, this wouldn't be a problem for the artists. Fake streams would cause Spotify to pay those artists more, but it wouldn't affect other artists. It would obviously be in Spotify's best interests to identify fake streams and not pay for them.

But their actual model is pro-rata. There's a fixed pool of payment money, and each artist gets a share based on their percentage of all the streams. As a result, if some artists artificially increase their streams, they get revenue that should have gone to other artists who aren't padding their streams.
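The effect of padding under a pro-rata pool can be sketched with a toy calculation (the artists, stream counts, and pool size below are invented for illustration and are not Spotify's actual figures):

```python
# Hypothetical illustration of pro-rata streaming payouts.
def pro_rata_payouts(streams, pool):
    """Split a fixed payment pool among artists by their share of total streams."""
    total = sum(streams.values())
    return {artist: pool * n / total for artist, n in streams.items()}

honest = {"A": 1000, "B": 1000}
padded = {"A": 1000, "B": 1000 + 2000}  # B adds 2000 fake streams

print(pro_rata_payouts(honest, pool=100.0))  # A and B each get 50.0
print(pro_rata_payouts(padded, pool=100.0))  # A drops to 25.0; B takes 75.0
```

Note that the pool is fixed, so B's fake streams do not cost Spotify anything extra; the entire gain comes out of A's payout.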

Streams are supposed to reflect the number of people actually listening to the music. Bots that "pretend" to be humans violate that expectation.

In terms of online fraud, this is comparable to vendors in online marketplaces who post fake positive reviews of their own products, or negative reviews of a competitor.


When a human uses a computer to crawl a site, and the publisher tries to stop them, the true question is whether they are entitled to crawl the site or not. This is dependent on various factors which (at least here in the US) remain somewhat unsettled. The deception is just part of the game of cat-and-mouse around that.

By contrast, when a human uses a computer to fake streaming plays, it's because that human gets paid for those plays. As part of the agreement by which they get paid, they will most likely have signed some sort of contract agreeing that their OWN plays don't count towards what they get paid, in the same way that websites selling ad space (for which they get paid) agree not to click the ads on their own site. The underlying fraud is the intentional false claim that people are listening to their music, for which they will get paid. Whether they commit that fraud with a program that throws the stream away, or by playing their own music in their empty house, is just a matter of scale (and of how easy the fraud is to prove).

  • Spotify EULA explicitly forbids crawling. (However, it's Spotify who's being sued, rather than any crawling party. Also, crawling isn't synonymous with every automated access.) Commented Nov 7 at 11:09

In all jurisdictions I'm aware of, the decisive element is financial benefit from misrepresentation.

If this was merely for information, it would only be annoying. In most cases there wouldn't be anything illegal about this.

But in the case of Spotify, earnings are paid per play. If the number of plays is manipulated, the payments to that artist are manipulated - and this is fraud.

  • This sounds like Spotify would have a cause of action against the fraudster, not other artists as in the existing case. Commented Nov 7 at 15:16

Your linked article doesn't specify the exact law being infringed here, but if I had to guess I suspect California Penal Code PEN § 532(a) covers it (the salient parts, on my own interpretation, being the clauses on fraudulent representation and false reporting of mercantile character):

(a) Every person who knowingly and designedly, by any false or fraudulent representation or pretense, defrauds any other person of money, labor, or property, whether real or personal, or who causes or procures others to report falsely of his or her wealth or mercantile character, and by thus imposing upon any person obtains credit, and thereby fraudulently gets possession of money or property, or obtains the labor or service of another, is punishable in the same manner and to the same extent as for larceny of the money or property so obtained.

My understanding is this could be prosecuted on either of the following bases:

  • fraudulent representation causing others to lose the share of the money they have a right to via market representation; or
  • fraudulent representation causing the person being supported to gain more money than they have a right to from greater market representation (possibly this is exaggeration, and therefore false reporting, of their mercantile character?)

While the EU AI Act does not address fraud directly, it does require transparency in many cases. Article 50(2) says:

AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated

And Article 50(4):

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated.

So pretending to be a human, without disclosing that it is actually an AI system, might be in violation of the EU AI Act.

  • Clicking an "I am not a robot" button does not involve "generating synthetic audio, image, video or text content" or "generat[ing] or manipulat[ing] image, audio or video content". Commented Nov 6 at 23:39
  • @GlennWillen Your comment raises a very interesting point and I suspect it will be at the heart of some dispute in the near future. For centuries humans have authenticated documents by scribbling a "unique" signature at the bottom of the page. I'm sure everyone would agree that this signature is an "image or text content". The idea of a "no-captcha captcha" is to simply ask the user to click a box, and then analyse the mouse movements of the click to decide whether this was a human click or not. (1/2) Commented Nov 7 at 13:43
  • (2/2) This "uniquely human mouse-click" is pretty similar to a signature or fingerprint, and although it does not seem to fit "audio, image, video or text content" to the letter, in the sense that it's never displayed on screen, the mouse movement is still some kind of content, albeit a non-displayed one. And the AI passed the "I'm human" test by generating a synthetic, human-looking fake mouse movement. So it is generated synthetic content, albeit not clearly an image or video. Commented Nov 7 at 13:46
  • I'm not saying you couldn't pass a law covering something like that, but I would be very disappointed if a judge deliberately misinterpreted the quoted law to cover it. Commented Nov 7 at 17:24
