![](https://sopuli.xyz/pictrs/image/d8e3ca8f-2a39-4c52-b9fa-f937d462822c.png)
![](https://lemmy.ml/pictrs/image/2QNz7bkA1V.png)
AI-generated CSAM is still CSAM.
Idk, with real people the determination of whether someone is underage is based on their actual age, not their physical appearance. There are people who look unnaturally young who could legally do porn, and underage people who look much older but aren't allowed. It's not about how they look, but how old they are.
With drawn or AI-generated CSAM, how would you draw the line between what's fine and what's a major crime with lifelong repercussions? There's no actual age to use since the images aren't real, so how do you determine a legal age? Do you use some physical-development rating scale and pick a value that's developed enough? Do you have a committee that just says "yeah, looks kinda young to me" and convicts someone of child pornography?
To be clear, I'm not trying to defend these people, but trying to determine what counts as legal or illegal for fake images seems like a legal nightmare. I'm sure there are cases that would be more clear cut (if they prompt the AI with a specific age, try to make deepfakes of a specific person, etc.), but a lot of it seems really murky when you try to imagine how you'd actually prosecute over it.
Yeah, the bank that manages my mortgage has mandatory text message 2FA if you're on a new computer. And something about Firefox keeps it from remembering my machine, so I have to do the text message 2FA every time.
Right now it's working fine, but there was a period of a few months where the text messages would take 10-15 minutes to arrive after you tried to log in, and the login attempt would expire after 5 minutes, making it impossible to log in. All of which could be avoided if they'd let me use a 2FA app.
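For anyone curious why an authenticator app sidesteps the SMS delay problem entirely: apps use TOTP (RFC 6238), where the code is computed locally from a shared secret and the current clock, so nothing has to be delivered at login time. A minimal sketch in Python using only the standard library (the base32 secret below is just a placeholder, not a real key):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret, same base32 format a setup QR code would contain
print(totp("JBSWY3DPEHPK3PXP"))
```

Since the phone and the server each derive the code from the shared secret and the clock, the 30-second validity window is enforced by math rather than by a text message arriving on time.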