
AI Topic: AlphaZero, ChatGPT, Bard, Stable Diffusion and more!

Joined
Jun 18, 2021
Messages
2,425 (2.14/day)

This crap just doesn't add up at all. At this point I'm just trying to make sense of everything.

How to acquire a very valuable company for next to nothing 101. I mean, I don't know, but there are really two possible scenarios:
1- The OpenAI board is super super dumb and decided to commit company suicide by overplaying their hand and giving their most valuable asset - the talent working there - away to a willing competitor.
2- Microsoft engaged in some very anti-competitive, shady and unethical behaviour by negotiating special deals with OpenAI talent, and the board overreacted and made Microsoft's job even easier.

Either way, it looks like OpenAI is finished. The special thing they had was the talent working there; the tech is not particularly special - a variation of Google's transformer model that everyone else can replicate and is replicating - and they simply have no business model to commercialize their LLM, and won't be able to support the costs of running the service when Microsoft pulls out.
 
Joined
Dec 29, 2010
Messages
3,614 (0.73/day)
Processor AMD 5900x
Motherboard Asus x570 Strix-E
Cooling Hardware Labs
Memory G.Skill 4000c17 2x16gb
Video Card(s) RTX 3090
Storage Sabrent
Display(s) Samsung G9
Case Phanteks 719
Audio Device(s) Fiio K5 Pro
Power Supply EVGA 1000 P2
Mouse Logitech G600
Keyboard Corsair K95
How to acquire a very valuable company for next to nothing 101. I mean, I don't know, but there are really two possible scenarios:
1- The OpenAI board is super super dumb and decided to commit company suicide by overplaying their hand and giving their most valuable asset - the talent working there - away to a willing competitor.
2- Microsoft engaged in some very anti-competitive, shady and unethical behaviour by negotiating special deals with OpenAI talent, and the board overreacted and made Microsoft's job even easier.

Either way, it looks like OpenAI is finished. The special thing they had was the talent working there; the tech is not particularly special - a variation of Google's transformer model that everyone else can replicate and is replicating - and they simply have no business model to commercialize their LLM, and won't be able to support the costs of running the service when Microsoft pulls out.
There's something going on here that some power is trying to keep under wraps. No doubt Altman is a next-level douche, going from "help the world" to max profits without being upfront and honest about it. The board, though, kind of saw this coming a mile away, so I wonder, or doubt, that it's Altman's max-profit move.

And yeah, then again, you can't discount some next-level shadiness from MS.
 
Joined
Jan 14, 2023
Messages
715 (1.27/day)
System Name Asus G16
Processor i9 13980HX
Motherboard Asus motherboard
Cooling 2 fans
Memory 32gb 5600mhz cl40
Video Card(s) 4080 laptop
Storage 16tb, x2 8tb SSD
Display(s) QHD+ 16:10 (2560x1600, WQXGA) 240hz
Power Supply 330w psu
The current rumor about why OpenAI's CEO was fired comes from the link below.

The rumor is that the board of OpenAI feared AI was moving faster than they expected, and they feared for the future of humanity. OpenAI might have had a huge breakthrough in AGI (artificial general intelligence). If true, the AGI is currently doing grade-school-level math.
AI has issues with math because math requires a single correct answer, unlike writing.
 
Last edited:
Joined
Feb 18, 2005
Messages
5,562 (0.78/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
Remember when I was talking about how ChatGPT is fundamentally dangerous because it's designed to give you answers that you want to hear? Latest example.

Nature said:
ChatGPT generates fake data set to support scientific hypothesis

Researchers say that the model behind the chatbot fabricated a convincing bogus database, but a forensic examination shows it doesn’t pass for authentic.

The current rumor about why OpenAI's CEO was fired comes from the link below.

The rumor is that the board of OpenAI feared AI was moving faster than they expected, and they feared for the future of humanity. OpenAI might have had a huge breakthrough in AGI (artificial general intelligence). If true, the AGI is currently doing grade-school-level math.
AI has issues with math because math requires a single correct answer, unlike writing.
Oh man, no matter how many times I see this stupid shit repeated, I never get tired of it.

Let me make something clear: these "researchers" consistently claiming "AI breakthroughs that threaten humanity" don't exist. The "sources" are PR people in the company feeding this nonsense to journalists, because they know the "journalists" of today are too stupid and irresponsible to do due diligence; they'll just publish whatever random crap is thrown at them. All OpenAI cares about is keeping their name in the news, and this is the exact sort of press that accomplishes that, for free.
 
Joined
Jul 15, 2019
Messages
512 (0.28/day)
Location
Hungary
System Name Detox sleeper
Processor Intel i9-7980XE@4,5Ghz
Motherboard Asrock x299 Taichi XE (custom bios with ecc reg support, old microcode)
Cooling Custom water: Alphacool XT45 1080 + 9xArctic P12, EK-D5 pump combo, EK Velocity D-RGB block
Memory 8x16Gb Hynix DJR ECC REG 3200@4000
Video Card(s) Nvidia RTX 3080 FE 10Gb undervolted
Storage Samsung PM9A1 1Tb + PM981 512Gb + Kingston HyperX 480Gb + Samsung Evo 860 500Gb
Display(s) HP ZR30W (30" 2560x1600 10 bit)
Case Chieftec 1E0-500A-CT04 + AMD Sempron sticker
Audio Device(s) Genius Cavimanus
Power Supply Super Flower Leadex 750w Platinum
Mouse Logitech G400
Keyboard IBM Model M122 (boltmod, micro pro usbc)
Software Windows 11 Pro x64
What is the best used GPU to buy for Stable Diffusion? I like the old Tesla cards; I can make a custom cooler.

M40 - 80-100 USD, 12 GB, GM200
M10 - ~150 USD, 32 GB, 4x GM107 (weak GPUs?)
P4 - 70 USD, 8 GB, GP104 (easy to cool)
K80 - 100 USD, 24 GB, 2x GK210 (CUDA version is low?)
or there are a few ex-mining P104-100 cards, 8 GB, 60 USD (CUDA capable?)
or is there any better GPU in the ~50-150 USD range? Older AMD cards (Radeon Pro, MI...) are also cheap, but I don't know about their Stable Diffusion support.

I have two old Tesla cards, but I think the VRAM and CUDA support are too low (C2075 6 GB and K20C 5 GB).

The PC for this project:
Dell T5820 (ReBAR enabled and turbo-unlocked BIOS)
Xeon 2699 v3
4x 16 GB RAM
Win10 Pro, but Linux is possible if needed.
Thanks!
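Since CUDA support keeps coming up in lists like the one above, a quick sanity filter on VRAM and compute capability is enough to rule the oldest cards out. The compute-capability numbers below come from NVIDIA's published spec sheets; the 8 GB VRAM floor and the 3.7 compute-capability floor (roughly where older prebuilt PyTorch binaries cut off Kepler) are rough rules of thumb of mine, not hard requirements:

```python
# Rough suitability filter for used NVIDIA cards for Stable Diffusion.
# Compute capabilities are from NVIDIA's CUDA GPU list; the thresholds
# are rule-of-thumb assumptions, not hard specs.
CARDS = {
    "Tesla M40":   {"vram_gb": 12, "compute": 5.2},  # GM200
    "Tesla P4":    {"vram_gb": 8,  "compute": 6.1},  # GP104
    "Tesla K80":   {"vram_gb": 12, "compute": 3.7},  # 2x GK210, 12 GB each
    "P104-100":    {"vram_gb": 8,  "compute": 6.1},  # ex-mining GP104
    "Tesla C2075": {"vram_gb": 6,  "compute": 2.0},  # Fermi, too old
}

def usable(card, min_vram=8, min_compute=3.7):
    """True if the card clears both the VRAM and compute-capability floors."""
    c = CARDS[card]
    return c["vram_gb"] >= min_vram and c["compute"] >= min_compute

print([name for name in CARDS if usable(name)])
# -> ['Tesla M40', 'Tesla P4', 'Tesla K80', 'P104-100']
```

By this crude measure the C2075 is out regardless of price, and the K80 sits right on the boundary, which matches the poster's doubt about its CUDA version.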
 
Last edited:
Joined
Apr 24, 2020
Messages
2,665 (1.71/day)

Police are complaining about a (now deleted) article that falsely claims a murder happened in town.

It sounds like some lazy journalist clicked a few buttons on ChatGPT to make a story, and it just fully hallucinated a murder wholesale. The story got published for some reason (and then quickly removed). I'm not sure if this rises to the level of libel or slander though, since it'd be hard to say who exactly would sue this newspaper.

Still, it serves as a warning case to the new era we're in. Lazy people are clicking AI buttons they don't understand and just publishing the text without even reading it.
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,689 (2.38/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850w Gold (ATX3.0)
Software W10

Police are complaining about a (now deleted) article that falsely claims a murder happened in town.

It sounds like some lazy journalist clicked a few buttons on ChatGPT to make a story, and it just fully hallucinated a murder wholesale. The story got published for some reason (and then quickly removed). I'm not sure if this rises to the level of libel or slander though, since it'd be hard to say who exactly would sue this newspaper.

Still, it serves as a warning case to the new era we're in. Lazy people are clicking AI buttons they don't understand and just publishing the text without even reading it.

Yup. Governments need to get IT savvy lawmakers* into gear. This is going to add to an already emerging problem.

*Yeah, I know. IT savvy lawmaker is an oxymoron.
 
Joined
Apr 12, 2013
Messages
7,038 (1.71/day)
The issue is there are no "incentives" for lawmakers to penalize the perpetrators hard for this, although in the above case it's probably only a minor thing. Now if this were against a billion-dollar company, heads would obviously roll :slap:
 
Joined
Apr 24, 2020
Messages
2,665 (1.71/day)

[attached screenshot: a default ChatGPT/OpenAI error message appearing verbatim in an Amazon product listing]


----------

ChatGPT users are ruining Amazon by trying to automatically create titles and descriptions for various products. This is one of the many default error messages that OpenAI / ChatGPT will emit in some circumstances.

People have been finding a huge uptick in Tweets and other social media posts where clear bot behavior is happening.
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,717 (1.96/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus Block, HWLABS Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 2x A4x10, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White, Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19044.4046
Benchmark Scores Legendary
Yup. Governments need to get IT savvy lawmakers* into gear. This is going to add to an already emerging problem.

*Yeah, I know. IT savvy lawmaker is an oxymoron.
Getting lawmakers that don't belong in a nursing home would probably help with the IT savvy part.

Term and age limits when?

"AI" real time translation is the best use of it I've found so far. Little chance of catastrophic fk up.
 
Joined
Jun 18, 2021
Messages
2,425 (2.14/day)

Paper arguing they've discovered an algorithm that can differentiate between LLM output and human writing.

Their example on GitHub:
from binoculars import Binoculars

bino = Binoculars()

# ChatGPT (GPT-4) output when prompted with “Can you write a few sentences about a capybara that is an astrophysicist?"
sample_string = '''Dr. Capy Cosmos, a capybara unlike any other, astounded the scientific community with his
groundbreaking research in astrophysics. With his keen sense of observation and unparalleled ability to interpret
cosmic data, he uncovered new insights into the mysteries of black holes and the origins of the universe. As he
peered through telescopes with his large, round eyes, fellow researchers often remarked that it seemed as if the
stars themselves whispered their secrets directly to him. Dr. Cosmos not only became a beacon of inspiration to
aspiring scientists but also proved that intellect and innovation can be found in the most unexpected of creatures.
'''

print(bino.compute_score(sample_string)) # 0.75661373
print(bino.predict(sample_string)) # 'AI-Generated'

A 0.01% false positive rate, you say? You're full of shit...
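For context on what that score means: per the paper, Binoculars compares the text's perplexity under an "observer" model against a cross-perplexity term from a second "performer" model, and thresholds the ratio. Here is a toy numpy sketch of that ratio using made-up logits; the real package wraps two full LLMs, and this function is my illustration of the idea, not the library's internals:

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Numerically stable log-softmax over the vocabulary axis.
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def binoculars_score(obs_logits, perf_logits, token_ids):
    obs_logp = log_softmax(obs_logits)
    # Log-perplexity: how surprising the actual tokens are to the observer.
    ppl = -np.mean(obs_logp[np.arange(len(token_ids)), token_ids])
    # Cross-perplexity: how surprising the performer's predicted
    # distribution is to the observer, averaged over positions.
    perf_p = np.exp(log_softmax(perf_logits))
    xppl = -np.mean((perf_p * obs_logp).sum(axis=-1))
    return ppl / xppl

rng = np.random.default_rng(0)
obs = rng.normal(size=(12, 50))                     # 12 tokens, toy 50-word vocab
perf = obs + rng.normal(scale=0.1, size=obs.shape)  # a similar second model
ids = rng.integers(0, 50, size=12)
print(binoculars_score(obs, perf, ids))
```

In the paper, low ratios (the observer finds the text about as predictable as the performer's own predictions) get flagged as machine-generated, which matches the 0.7566 → 'AI-Generated' result in the example above.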
 
Joined
Mar 4, 2016
Messages
558 (0.18/day)
Location
Zagreb, Croatia
System Name D30 w.2x E5-2680; T5500 w.2x X5675;2x P35 w.X3360; 2x Q33 w.Q9550S/Q9400S & laptops.
The current rumor about why OpenAI's CEO was fired comes from the link below.

The rumor is that the board of OpenAI feared AI was moving faster than they expected, and they feared for the future of humanity. OpenAI might have had a huge breakthrough in AGI (artificial general intelligence). If true, the AGI is currently doing grade-school-level math.
AI has issues with math because math requires a single correct answer, unlike writing.
Well, they are right... in a few years, imagine how much of the "work force" will need to change because of AI. :cool:
 
Joined
Jan 14, 2023
Messages
715 (1.27/day)
System Name Asus G16
Processor i9 13980HX
Motherboard Asus motherboard
Cooling 2 fans
Memory 32gb 5600mhz cl40
Video Card(s) 4080 laptop
Storage 16tb, x2 8tb SSD
Display(s) QHD+ 16:10 (2560x1600, WQXGA) 240hz
Power Supply 330w psu
Well, they are right... in a few years, imagine how much of the "work force" will need to change because of AI. :cool:
It's already starting; Amazon is trying out new robots that pick up totes.
 
Joined
May 22, 2024
Messages
227 (3.39/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-48 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Audio Device(s) Sound Blaster AE-7
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
For what it's worth, Stable Diffusion 3 is out, from a struggling pioneer of the field and without much of a splash, into a space already saturated by production-grade APIs and hobbyist finetunes of SDXL.

It's currently most noted for generating body horror on any prompt requiring a smidgen of anatomical knowledge or an atypical pose. The baby has been thrown out with the bathwater, at high velocity, in the name of safety.

The effect has been observed since at least SD 2.0, but I assume they had no choice except to render it worse-than-useless for this purpose in an official release of model weights to the general public, where legitimate artistic expression and anatomical studies, versus deepfaked AI smut - and worse, unprintable things - often differ only by end use, distribution, and of course intention.

That, and it can at last spell and understand composition prompts relatively well. It should work fine, so long as you don't use it to generate anything human-shaped beyond the basics.

P.S. Now I wonder whether they would have been better off, or at least subjected to less ridicule, had they simply been more open about the whole thing upfront, with disclaimers and acknowledgements like "To prevent misuse of the model by bad actors, we excluded potentially objectionable images from the training dataset, and utilized data poisoning to disrupt the utility of potentially harmful concepts. As a consequence, this model cannot reliably render the human form in select contexts, and cannot produce accurate non-facial anatomical detail."

That, instead of the too-broad statement in their actual press release:
We believe in safe, responsible AI practices....We have conducted extensive internal and external testing of this model and have developed and implemented numerous safeguards to prevent harms.
 
Last edited:
Joined
Feb 18, 2005
Messages
5,562 (0.78/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
For what it's worth, Stable Diffusion 3 is out, from a struggling pioneer of the field and without much of a splash, into a space already saturated by production-grade APIs and hobbyist finetunes of SDXL.

It's currently most noted for generating body horror on any prompt requiring a smidgen of anatomical knowledge or an atypical pose. The baby has been thrown out with the bathwater, at high velocity, in the name of safety.

The effect has been observed since at least SD 2.0, but I assume they had no choice except to render it worse-than-useless for this purpose in an official release of model weights to the general public, where legitimate artistic expression and anatomical studies, versus deepfaked AI smut - and worse, unprintable things - often differ only by end use, distribution, and of course intention.

That, and it can at last spell and understand composition prompts relatively well. It should work fine, so long as you don't use it to generate anything human-shaped beyond the basics.

P.S. Now I wonder whether they would have been better off, or at least subjected to less ridicule, had they simply been more open about the whole thing upfront, with disclaimers and acknowledgements like "To prevent misuse of the model by bad actors, we excluded potentially objectionable images from the training dataset, and utilized data poisoning to disrupt the utility of potentially harmful concepts. As a consequence, this model cannot reliably render the human form in select contexts, and cannot produce accurate non-facial anatomical detail."

That, instead of the too-broad statement in their actual press release:
I honestly don't understand this stupid prudishness from Stability - there is no way they released an image-generation product not knowing it would be used to create porn. This is the internet, FFS; it pretty much exists to facilitate the sharing of smut. And if they're willing to hobble their product so it can't generate porn, I can guarantee you their competitors won't - so all Stability has accomplished is to lose market share to those competitors. GG WP, not.

Presumably they are concerned about deepfakes and far more objectionable types of porn being generated by their product, and about being somehow held liable as a result, but honestly... that particular genie is already waaay out of its bottle. Just like you can't have the internet without porn, you can't have image generation without porn. Humans and sexuality are intrinsically linked; can we maybe start being rational about this?
 
Joined
May 22, 2024
Messages
227 (3.39/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-48 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Audio Device(s) Sound Blaster AE-7
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
I honestly don't understand this stupid prudishness from Stability - there is no way they released an image-generation product not knowing it would be used to create porn. This is the internet, FFS; it pretty much exists to facilitate the sharing of smut. And if they're willing to hobble their product so it can't generate porn, I can guarantee you their competitors won't - so all Stability has accomplished is to lose market share to those competitors. GG WP, not.

Presumably they are concerned about deepfakes and far more objectionable types of porn being generated by their product, and about being somehow held liable as a result, but honestly... that particular genie is already waaay out of its bottle. Just like you can't have the internet without porn, you can't have image generation without porn. Humans and sexuality are intrinsically linked; can we maybe start being rational about this?
Considering that they are already being sued for copyvio, and I think for unmentionables in the training dataset of the original SD 1.x, I do understand their behaviour with this release. As to the policy of their real, commercial competitor, I plead ignorance.

But yes, given that the model otherwise has merit, releasing the weights into the wild means someone will finetune all that safety engineering away within the month - if the model can be finetuned at all. It is the Internet, after all. Sufficiently powerful models - models that can spell and do composition - had previously been locked behind commercial APIs with all their content policies and terms of service. All Stability gained here is plausible deniability.

I have no idea how this one will go down, but it won't be good.
 
Last edited:
Joined
Apr 24, 2020
Messages
2,665 (1.71/day)
I honestly don't understand this stupid prudishness from Stability - there is no way they released an image-generation product not knowing it would be used to create porn. This is the internet, FFS; it pretty much exists to facilitate the sharing of smut. And if they're willing to hobble their product so it can't generate porn, I can guarantee you their competitors won't - so all Stability has accomplished is to lose market share to those competitors. GG WP, not.

Presumably they are concerned about deepfakes and far more objectionable types of porn being generated by their product, and about being somehow held liable as a result, but honestly... that particular genie is already waaay out of its bottle. Just like you can't have the internet without porn, you can't have image generation without porn. Humans and sexuality are intrinsically linked; can we maybe start being rational about this?

There's this white-market porn that Mastercard / Visa are willing to serve, grey-market stuff that Mastercard / Visa think is too risky (but that isn't technically illegal yet), and then the straight-up black-market, FBI-will-start-investigating-you variety.

When you consider that in today's world, "porn" includes CSAM, and that CSAM is actively used to attack public websites (ex: if I were a dick and wanted to harm TechPowerUp, I'd create a bunch of accounts and start posting CSAM to mess with the moderators), then it makes more sense. CSAM is tracked by hashes and fingerprints; nobody wants humans looking at the stuff on a regular basis, so it's a lot of automated processes. So if you have a new "porn generator" that could potentially create CSAM content, then you need to participate in the database or get fucking nuked by the powers that be.

Otherwise, I'd take all those .jpgs and start posting them to TechPowerUp (or whatever website you like), just to shut down other websites. And if all those .jpegs aren't in the hashing databases that various websites use, then that means manual curation. Ex: imagine how the moderators would feel if they had to spend ~2 or 3 hours manually deleting that stuff. Would I, the hypothetical troll, have gotten a moderator in trouble with their IT department for viewing it at work? Etc. etc. etc.

So yeah, the black-market porn / CSAM stuff is extremely toxic. I would assume that this generative AI would be used to create CSAM, and no, no one actually wants loads of AI-CSAM proliferating across the internet. That'd just wreck moderators.

-----------

If there were assurances that the porn would "only" be of the white-market variety that everyone is cool with, then yeah, no one would have a problem with it. But **every** pornographic website has to deal with the inevitable arrival of the CSAM crowd intruding, spreading that stuff, and then... well... things get really political real quickly.

The grey-market stuff (i.e. Visa/Mastercard doesn't like it, but the FBI won't investigate you over it, because it's not illegal yet) is also troublesome, because it's sufficient to get you demonetized in practice even if it isn't illegal. Visa/Mastercard have stricter standards because they want to be accepted not just in the USA but around the world, so they pretty much play by the "most conservative" rules across the spectrum of countries they serve. Porn that's illegal by Australian standards would likely be banned by Visa/Mastercard even if it's legal in the USA, and vice versa: illegal in the USA means banned by Visa/Mastercard in Australia, even if it's legal in Australia. This grey-market stuff is a similar issue to the CSAM stuff, just led by the payment processors instead of any particular legal entity.
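As an aside on those hashes and fingerprints: known-image matching typically uses a perceptual hash rather than a cryptographic one, so a slightly recompressed or brightened copy still matches. Here's a toy "average hash" sketch to show the idea; real systems like PhotoDNA are far more sophisticated, and the 8x8 thumbnail, function names, and threshold here are illustrative assumptions only:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel of a small grayscale
    thumbnail, set when that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# An 8x8 grayscale thumbnail and a uniformly brightened copy:
img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
similar = [[min(255, p + 3) for p in row] for row in img]

h1, h2 = average_hash(img), average_hash(similar)
print(hamming(h1, h2))  # prints 0: the brightened copy still matches
```

Because every pixel shifts together with the mean, the brightness change doesn't flip any bits, which is exactly the robustness property a fingerprint database wants and a cryptographic hash like SHA-256 lacks.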
 
Last edited:
Joined
Feb 18, 2005
Messages
5,562 (0.78/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
There's this white-market porn that Mastercard / Visa are willing to serve, grey-market stuff that Mastercard / Visa think is too risky (but that isn't technically illegal yet), and then the straight-up black-market, FBI-will-start-investigating-you variety.

When you consider that in today's world, "porn" includes CSAM, and that CSAM is actively used to attack public websites (ex: if I were a dick and wanted to harm TechPowerUp, I'd create a bunch of accounts and start posting CSAM to mess with the moderators), then it makes more sense. CSAM is tracked by hashes and fingerprints; nobody wants humans looking at the stuff on a regular basis, so it's a lot of automated processes. So if you have a new "porn generator" that could potentially create CSAM content, then you need to participate in the database or get fucking nuked by the powers that be.

Otherwise, I'd take all those .jpgs and start posting them to TechPowerUp (or whatever website you like), just to shut down other websites. And if all those .jpegs aren't in the hashing databases that various websites use, then that means manual curation. Ex: imagine how the moderators would feel if they had to spend ~2 or 3 hours manually deleting that stuff. Would I, the hypothetical troll, have gotten a moderator in trouble with their IT department for viewing it at work? Etc. etc. etc.

So yeah, the black-market porn / CSAM stuff is extremely toxic. I would assume that this generative AI would be used to create CSAM, and no, no one actually wants loads of AI-CSAM proliferating across the internet. That'd just wreck moderators.

-----------

If there were assurances that the porn would "only" be of the white-market variety that everyone is cool with, then yeah, no one would have a problem with it. But **every** pornographic website has to deal with the inevitable arrival of the CSAM crowd intruding, spreading that stuff, and then... well... things get really political real quickly.

The grey-market stuff (i.e. Visa/Mastercard doesn't like it, but the FBI won't investigate you over it, because it's not illegal yet) is also troublesome, because it's sufficient to get you demonetized in practice even if it isn't illegal. Visa/Mastercard have stricter standards because they want to be accepted not just in the USA but around the world, so they pretty much play by the "most conservative" rules across the spectrum of countries they serve. Porn that's illegal by Australian standards would likely be banned by Visa/Mastercard even if it's legal in the USA, and vice versa: illegal in the USA means banned by Visa/Mastercard in Australia, even if it's legal in Australia. This grey-market stuff is a similar issue to the CSAM stuff, just led by the payment processors instead of any particular legal entity.
None of that's relevant, though. Stability isn't training their model on CP/CSAM (well, I bloody well hope they aren't), so the only way they could be exposed to such material is if someone uploads one of those images as a source... in which case they would match it against a CSAM DB and refuse to do anything with it (and hopefully send the cops to the uploader's house). That seems about the same level of care that any ordinary image host would be expected to provide, so I don't see how CSAM would be any more of a concern for Stability than for anyone else. In other words, I'm really not seeing how or why CSAM could be a strong reason for this company to censor their dataset.
 
Joined
Apr 24, 2020
Messages
2,665 (1.71/day)
Stability isn't training their model on CP/CSAM (well, I bloody well hope they aren't), so the only way they could be exposed to such material is if someone uploads one of those images as a source...

You don't need "good-looking" CSAM for it to cause issues. You just need output that qualifies as CSAM.

Lobotomizing your AI so that it's bad at humans in general sounds like a step too far, but think about it: if you can't make any human look good, the system as a whole cannot be used for CSAM at all. It's a very conservative approach, but it's what I expect is driving the decision-making here.
 
Joined
May 22, 2024
Messages
227 (3.39/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-48 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Audio Device(s) Sound Blaster AE-7
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
None of that's relevant, though. Stability isn't training their model on CP/CSAM (well, I bloody well hope they aren't), so the only way they could be exposed to such material is if someone uploads one of those images as a source... in which case they would match it against a CSAM DB and refuse to do anything with it (and hopefully send the cops to the uploader's house). That seems about the same level of care that any ordinary image host would be expected to provide, so I don't see how CSAM would be any more of a concern for Stability than for anyone else. In other words, I'm really not seeing how or why CSAM could be a strong reason for this company to censor their dataset.
You don't need "good-looking" CSAM for it to cause issues. You just need output that qualifies as CSAM.

Lobotomizing your AI so that it's bad at humans in general sounds like a step too far, but think about it: if you can't make any human look good, the system as a whole cannot be used for CSAM at all. It's a very conservative approach, but it's what I expect is driving the decision-making here.
That's the crux. Current text-to-image models can extrapolate, and these capabilities get considerably more... let's just say, dual-use, with the improved compositional capabilities mostly provided by an advanced encoder stage. Such capabilities existed years ago - I think prototype Imagen could do it; it's the first model I remember seeing spell reasonably well - but as far as I'm aware they did not make it into any well-known open-weight model, until now.

Getting back to the issue at hand: if you could - in the instance described in the Ars article - produce reasonable pictures of people lying on grass, you could reasonably expect the model to be able to produce <any person or combination thereof, of any appearance and physique> lying on <anything physical or metaphorical, in any context>. I'll just leave it at that. It's not a comfortable topic.
 
Last edited:
Joined
May 22, 2024
Messages
227 (3.39/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-48 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Audio Device(s) Sound Blaster AE-7
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
And now, one calendar roll-over later, exactly two finetunes of Stable Diffusion 3 have shown up on HF. Only one of them looked viable, and even that was still proof-of-concept-y.

I take back my prediction that someone was going to finetune its safety engineering away within the month - at least, nothing of the sort has shown up. Meanwhile, SDXL LoRAs, finetunes, and mixes still pop up left and right.

It seems the community is more or less giving this one a pass, like they did with SD 2.0 - and SD 2.0 wasn't nearly as badly limited. Probably not surprising either, given that SDXL is actually larger than the currently released SD3-medium in terms of parameter count. Whatever one could say about Stability failing to monetize their models and efforts, and pending something unexpected happening, this looks to be how it ends - and certainly not with a bang.
 