Monday, September 6th 2010

Picture of AMD "Cayman" Prototype Surfaces

Here is the first picture of a working prototype of the AMD Radeon HD 6000 series "Cayman" graphics card. This particular card is reportedly the "XT" variant, or what will go on to be the HD 6x70, which is the top single-GPU SKU based on AMD's next-generation "Cayman" performance GPU. The picture reveals a card that appears to be roughly the size of a Radeon HD 5870, with a slightly more complex-looking cooler. The PCB is red in color, and the display output is slightly different compared to the Radeon HD 5800 series: there are two DVI, one HDMI, and two mini-DisplayPort connectors. The specifications of the GPU remain largely unknown, except that it is reportedly built on the TSMC 40 nm process. The refreshed Radeon HD 6000 series GPU lineup, coupled with next-generation Bulldozer architecture CPUs and Fusion APUs, is sure to make AMD's lineup for 2011 quite an interesting one.

Update (9/9): A new picture of the reverse side of the PCB reveals 8 memory chips (256-bit wide memory bus), 6+2 phase VRM, and 6-pin + 8-pin power inputs.
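For readers wondering how the 256-bit figure follows from the chip count, here is a minimal sketch of the arithmetic, assuming the standard 32-bit interface per GDDR5 package; the 4.8 GT/s effective data rate used for the bandwidth line is the HD 5870's reference speed and is purely illustrative, since Cayman's memory clocks are unknown.

```python
# Rough sanity check of the bus-width inference from the PCB photo,
# assuming the usual 32-bit channel per GDDR5 package.
CHANNEL_WIDTH_BITS = 32          # per GDDR5 chip (standard configuration)
chips = 8                        # memory packages counted on the photo

bus_width_bits = chips * CHANNEL_WIDTH_BITS
print(f"Inferred bus width: {bus_width_bits}-bit")            # 256-bit

# Hypothetical bandwidth at an assumed 4.8 GT/s effective data rate
# (the HD 5870's reference speed, used only for illustration):
data_rate_gtps = 4.8
bandwidth_gbps = bus_width_bits / 8 * data_rate_gtps
print(f"Bandwidth at {data_rate_gtps} GT/s: {bandwidth_gbps:.1f} GB/s")   # 153.6 GB/s
```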
Source: ChipHell

118 Comments on Picture of AMD "Cayman" Prototype Surfaces

#51
buggalugs
haha Cheezburger give it up man. No 512bit memory bus for you!!
Posted on Reply
#52
LAN_deRf_HA
That speed of GDDR5 has existed for a while; it just wasn't cost-effective to immediately mass-produce it. It is now. AMD is a company intent on making money, not running around like a chicken with its head cut off. That's why the 6-series flagship will have a 256-bit bus; a 512-bit bus is moronic. It would raise the price, reduce sales, and not increase the profit margin, not to mention provide far more bandwidth than the core could utilize. An utter waste, your irrational dream is.
Posted on Reply
#53
cheezburger
crazyeyesreaper wrote:
> ^ Source material, or I call FUD on the employee-treatment BS.
>
> True, and I mentioned that already: ATi was behind from the GeForce 6000 series all the way up to Nvidia's GT200 series. That's five product cycles, yet ATi is still here for the most part, just as Nvidia will be.
>
> And I still call bullshit on the lunch-vs-product claim. If Nvidia spent more on product development, they wouldn't need a GPU that uses 320 W to rival an ATi GPU that uses 212 W.
>
> It also doesn't matter whether the GT200 has GDDR5 or not, because performance wouldn't benefit in the least.
>
> Again, a 512-bit bus is extremely costly, and the extra bandwidth would do NOTHING to make the GPU faster. A GPU is a whole package: a 512-bit bus gives more bandwidth, but if the GPU can't make use of what it already has, giving it more doesn't do a damn thing.
>
> And it doesn't matter much that a GTX 460 still uses more power than a 5850, and the 1 GB variants use nearly as much power as a 5870, yet are still slower in their respective stock configurations.
>
> Let's face a few facts: none of this really means jackshit.
>
> Currently Nvidia is behind in market share. They were eight months late to market with anything DX11, and they still have yet to finish their DX11 lineup. ATi is already moving on with their second-generation DX11 cards, which in the meantime lets them test parts of their next series, the HD 7000, meaning they're basically getting real-world performance estimates on parts of a future architecture while Nvidia is still trying to finish the 400 series lineup.
>
> And again, a 512-bit bus won't do a damn thing. People said the same about the 5870 being memory-bandwidth starved, and it's not; it's the ROP count. So I highly doubt the 6000 series needs any more bandwidth than the 5000 series provides, but it gets more anyway through faster memory. And again, we have no concrete info, so basically I see a bunch of assumptions based on FUD with no real source.
Source? Go Google it...

Fermi consumes 320 W because it added a lot of non-gaming features (general compute) that waste a huge amount of die space. If they could take those out, the power draw would drop by 30%...

Tell me first why 512-bit costs a lot. It might have cost a lot back in the R600 days, with its pathetic 80 nm fab, but today that is largely solved by the 40 nm process. R600 also only had 16 ROPs, so most of its bus width was wasted, but today Cypress has 32 ROPs, and things are different. Each ROP only gets 8 bits of bus width to itself; 32 ROPs still (barely) fit a 256-bit bus. But what happens with a GPU that has 64 ROPs? It will cause a bottleneck in the communication between GPU and RAM. The overall bandwidth doesn't matter if most of the data is stuck at the ROPs because of the narrow bus. For example, Cypress XT is supposed to be double an RV770 XT in every spec but ends up only about 55% faster, while RV770 XT is about double an RV670 in most benchmarks. Why the difference? The answer: the bus width can't feed enough data to the GPU's ROPs and shaders. A 512-bit bus is necessary for future GPUs with more ROPs/shaders.
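To put the per-ROP argument above into numbers, here is a small sketch using the published HD 4870 (RV770 XT) and HD 5870 (Cypress XT) reference specs; "bandwidth per ROP" is only an illustrative metric for the commenter's point, not an official design rule.

```python
# Memory bandwidth available per ROP, as a rough illustration of the
# argument above.  Specs are the published reference figures; the per-ROP
# ratio is just an illustrative metric, not something AMD designs to.
cards = {
    # name: (bus width in bits, effective data rate in GT/s, ROP count)
    "HD 4870 (RV770 XT)":   (256, 3.6, 16),
    "HD 5870 (Cypress XT)": (256, 4.8, 32),
}

for name, (bus_bits, rate_gtps, rops) in cards.items():
    bandwidth = bus_bits / 8 * rate_gtps                 # GB/s
    print(f"{name}: {bandwidth:6.1f} GB/s total, "
          f"{bandwidth / rops:4.1f} GB/s per ROP")
# HD 4870 (RV770 XT):   115.2 GB/s total,  7.2 GB/s per ROP
# HD 5870 (Cypress XT): 153.6 GB/s total,  4.8 GB/s per ROP
```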
buggalugs wrote:
> haha Cheezburger give it up man. No 512-bit memory bus for you!!

Source? :D

LAN_deRf_HA wrote:
> That speed of GDDR5 has existed for a while; it just wasn't cost-effective to immediately mass-produce it. It is now. AMD is a company intent on making money, not running around like a chicken with its head cut off. That's why the 6-series flagship will have a 256-bit bus; a 512-bit bus is moronic. It would raise the price, reduce sales, and not increase the profit margin, not to mention provide far more bandwidth than the core could utilize. An utter waste, your irrational dream is.

Again, tell me why 512-bit costs a lot? Because of the bad experience with R600?

Do you think a $599 card for the high end is expensive?
Posted on Reply
#54
LAN_deRf_HA
cheezburger wrote:
> Again, tell me why 512-bit costs a lot? Because of the bad experience with R600?
>
> Do you think a $599 card for the high end is expensive?

I sweep away most of your points and then you respond by asking me questions not even related to what I said? I didn't say 512-bit costs a lot, I said it costs more. And what on earth are you trying to say with the second question? For someone so into AMD you don't seem to think highly of them. No way are they going to do something as moronic as release a $600 single-GPU card. The 6 series is meant to replace the 5 series, not coexist at some absurd price point above it. Your logic lacks logic.
Posted on Reply
#55
crazyeyesreaper
Not a Moderator
It's simple:

A larger bus means more pins; more pins means more complexity; more complexity means a higher chance of failure; a higher failure rate per die means lower yield; and lower yield means less profit.

Also, the pins can only get so small before they're too brittle to solder to a PCB, meaning 512-bit = more pins, and since the pins can only shrink so far, 512-bit takes more space.

So all a 512-bit bus effectively does with today's GDDR5 is:

* give bandwidth the GPU can't use
* make for a more complex PCB design
* carry a higher risk of failure due to complexity

Source: read this article and learn something
www.extremetech.com/article2/0,2845,2309870,00.asp

A perfect example of why 512-bit isn't needed:

4850 vs 4870: despite roughly double the bandwidth going from GDDR3 to GDDR5, the 4870 is only about 20% faster. Neither the RAM type nor the RAM speed made the difference; it was the higher core clock, as the two cards used the same GPU with different RAM to little real benefit.

The same can be said of 256-bit vs 512-bit: a 4870 could have had a 512-bit interface, but with GDDR3 it would only equal GDDR5 on a 256-bit bus. In other words, the 512-bit design would be more complex to produce with no benefit over the cheaper bus paired with faster, lower-voltage memory.
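As a rough check on the 4850-versus-4870 example, the reference memory configurations work out as follows; this is only a sketch using the launch clocks, and the roughly 20% performance gap cited above comes from reviews rather than from this arithmetic.

```python
# Memory bandwidth of the two RV770 boards from the example above,
# using the reference launch clocks.
def bandwidth_gbps(bus_bits, effective_rate_gtps):
    """Bytes per second moved across the memory interface, in GB/s."""
    return bus_bits / 8 * effective_rate_gtps

hd4850 = bandwidth_gbps(256, 1.986)   # GDDR3 at 993 MHz, double data rate
hd4870 = bandwidth_gbps(256, 3.600)   # GDDR5 at 900 MHz, quad data rate

print(f"HD 4850: {hd4850:.1f} GB/s")          # ~63.6 GB/s
print(f"HD 4870: {hd4870:.1f} GB/s")          # ~115.2 GB/s
print(f"Ratio:   {hd4870 / hd4850:.2f}x")     # ~1.81x the bandwidth, ~20% more speed
```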

What this means is that even if you double the bus width and the bandwidth, it won't make a 6870 enough faster to warrant the cost of the design. You can have your opinion, but last I checked you aren't an engineer working for Nvidia or ATi, and you don't seem to understand this subject well enough to form a decent, well-informed opinion on it, nor do you see the big picture in GPU design.


"Just because you can doesn't mean you should" is what comes to mind with 256-bit vs 512-bit.

Another way to see it: you lose 10% of the wafer space, so you get 10% fewer GPUs per wafer but gain 5% in performance. That 5% gain doesn't make you more money, because the 5% gained going from 256-bit to 512-bit is something they can get in a cheaper, more cost-effective way. So basically, if a wafer provides 100 GPUs at 256-bit and 95% of the performance, while 512-bit offers 90 GPUs at 100% of the performance, then in terms of products to market it might only be $20 more on the manufacturing end; but if those are wafers of $700 GPUs, those extra 10 GPUs just earned the company an extra $7,000 per wafer (100 GPUs at 256-bit vs 90 at 512-bit). That's why you won't see 512-bit.

Because 1,000,000 GPUs vs 900,000, if all of them are fully functional, is a huge profit difference for these companies, not to mention the 1M GPUs at 256-bit will most likely have a higher yield of usable GPUs than the 512-bit parts.

Of the 1M, maybe 800,000 can be used, but of the 900K only 600K might be usable, so by the time you run the numbers your precious 512-bit costs millions in profit. These companies are not here to hold your hand; that's your mother's job. They're here to make money. 512-bit won't make them any more money than 256-bit; in fact it costs more, and since ATi/AMD is trying to maintain positive cash flow, they're going to take the tiny, insignificant 1 fps loss in Crysis between 256 and 512 and reap the extra margin instead.

Note my math is hypothetical; I don't know the actual manufacturing costs of the GPU and all its components, but I'm sure most around here will agree the logic itself is what matters, and it's solid.
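For what it's worth, the hypothetical wafer math above can be written out directly; every number below (dies per wafer, yield, selling price, size of the production run) is the poster's assumption or an arbitrary illustration, not real manufacturing data.

```python
# Reproducing the hypothetical wafer economics from the post above.
# All inputs are the commenter's assumptions, not real figures.
PRICE_PER_GPU = 700            # assumed selling price in USD

def revenue(good_dies_per_wafer, wafers):
    return good_dies_per_wafer * wafers * PRICE_PER_GPU

# 256-bit design: 100 candidate dies per wafer, ~80% usable -> 80 good dies
# 512-bit design:  90 candidate dies per wafer, ~67% usable -> 60 good dies
usable_256 = int(100 * 0.80)
usable_512 = int(90 * 0.667)

wafers = 10_000                # arbitrary production run (1M candidate dies at 256-bit)
delta = revenue(usable_256, wafers) - revenue(usable_512, wafers)
print(f"Revenue advantage of the 256-bit design: ${delta:,.0f}")
# $140,000,000 across 10,000 wafers, under these made-up assumptions
```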
Posted on Reply
#56
btarunr
Editor & Senior Moderator
BazookaJoe wrote:
> I thought AMD had retired the "ATI" brand.
>
> If this is a new card, would it still be branded "ATI"?

The prototype may have been made long before AMD announced ATI's brand dissolution. It's a prototype, and ATI has been using that exact fan since the Radeon HD 2900 series.
Posted on Reply
#57
crazyeyesreaper
Not a Moderator
Awww shucks, I was hoping BTA would curb-stomp my posts and make the epic "you can't deny my logic" post to save us all from the 512-bit discussion.
Posted on Reply
#58
xtremesv
AMD's Cayman will be a refreshed Cypress aimed at regaining the most-powerful-GPU crown. I don't expect revolutionary architecture changes, maybe something creative with the tessellator(s?). I bet Cayman will have around 2400 stream processors, 100 TUs, and 48 ROPs with a GPU clock between 850 and 950 MHz.

This thread turned into a memory bus discussion. IMHO, the bus alone doesn't matter; you don't gain anything from, for example, a 512-bit bus paired with 400 MHz DDR2 memory. What matters here is the GDDR5 clock, so AMD can stick with a cheaper 256-bit bus and clock the memory higher. I'd say GDDR5 clocked at 1500 MHz on a 256-bit bus (192 GB/s) is my safe bet.
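The 192 GB/s figure follows directly from those guessed clocks; here is a quick sketch of the arithmetic, with the 1500 MHz / 256-bit inputs being the poster's speculation rather than confirmed Cayman specs.

```python
# Bandwidth arithmetic behind the 192 GB/s guess above.
# Inputs are the poster's speculation, not confirmed Cayman specs.
memory_clock_mhz = 1500        # GDDR5 base clock (speculative)
transfers_per_clock = 4        # GDDR5 transfers four bits per pin per clock
bus_width_bits = 256

effective_rate_gtps = memory_clock_mhz * transfers_per_clock / 1000   # 6.0 GT/s
bandwidth_gbps = bus_width_bits / 8 * effective_rate_gtps
print(f"{bandwidth_gbps:.0f} GB/s")    # 192 GB/s
```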
Posted on Reply
#59
cheezburger
crazyeyesreaper wrote:
> It's simple:
>
> A larger bus means more pins; more pins means more complexity; more complexity means a higher chance of failure; a higher failure rate per die means lower yield; and lower yield means less profit.
>
> Also, the pins can only get so small before they're too brittle to solder to a PCB, meaning 512-bit = more pins, and since the pins can only shrink so far, 512-bit takes more space.

Well, I'll only reply to this part, because the rest is pretty much the same argument...

A larger bus does require more pins on the BGA package that carries the GPU, true, but it doesn't really increase the die itself and has nothing to do with dies per wafer. The board just becomes more complex, with more PCB layers and a bigger GPU footprint, but not a bigger die. Your source also didn't say it would lower the yield of the GPU die; mostly it just makes it harder for the graphics card manufacturer to design the board. AMD wouldn't lose any profit, because this would be for the high-end part exclusively. And eventually neither Nvidia nor AMD can go without a bigger bus.

crazyeyesreaper wrote:
> Another way to see it: you lose 10% of the wafer space, so you get 10% fewer GPUs per wafer but gain 5% in performance. That 5% gain doesn't make you more money, because the 5% gained going from 256-bit to 512-bit is something they can get in a cheaper, more cost-effective way. So basically, if a wafer provides 100 GPUs at 256-bit and 95% of the performance, while 512-bit offers 90 GPUs at 100% of the performance, then in terms of products to market it might only be $20 more on the manufacturing end; but if those are wafers of $700 GPUs, those extra 10 GPUs just earned the company an extra $7,000 per wafer (100 GPUs at 256-bit vs 90 at 512-bit). That's why you won't see 512-bit.

Then remove some unnecessary design, such as the 5D shaders, and stop just adding more shaders like they did with R700... They wasted far more die space on those additional floating-point features (again, like Fermi... for that stupid general compute and that stupid Folding@home?). If they cut that off, they could have saved a lot of die space for features that deliver pure performance... Though I think they did make some new tweaks in Southern Islands, trying to remove as much of that useless stuff as possible and bring back what a graphics card is supposed to do: render graphics. R600's massive shader architecture was one of the worst ways to improve performance.
xtremesv wrote:
> AMD's Cayman will be a refreshed Cypress aimed at regaining the most-powerful-GPU crown. I don't expect revolutionary architecture changes, maybe something creative with the tessellator(s?). I bet Cayman will have around 2400 stream processors, 100 TUs, and 48 ROPs with a GPU clock between 850 and 950 MHz.
>
> This thread turned into a memory bus discussion. IMHO, the bus alone doesn't matter; you don't gain anything from, for example, a 512-bit bus paired with 400 MHz DDR2 memory. What matters here is the GDDR5 clock, so AMD can stick with a cheaper 256-bit bus and clock the memory higher. I'd say GDDR5 clocked at 1500 MHz on a 256-bit bus (192 GB/s) is my safe bet.

48 ROPs means a "half note" design... that is why Fermi fell so hard.
Posted on Reply
#60
crazyeyesreaper
Not a Moderator
And sure they can go without it. There's GDDR, GDDR2, GDDR3, GDDR4, GDDR5; what's to say GDDR6 doesn't double the bandwidth again, much like GDDR3 vs GDDR5, hmm??? Either way it doesn't matter. I'm walking away from this; you can believe in 512-bit all you want, but we're discussing the HD 6800 series and it won't have 512-bit.

After all, why would they make the 6000 series even more expensive to produce when it will most likely only be around for ten months to a year, much like the 5000 cards, before it's replaced by the 7000 series, which will be an all-new architecture from the ground up? It makes no sense to have a stop-gap GPU cycle cost any more than is needed to hold their market share and keep Nvidia in check.
Posted on Reply
#61
Unregistered
Wow, chill out guys. It doesn't matter if it's 512-bit or 256-bit; what matters is whether the card plays Crysis at 100+ FPS without costing an arm and a leg.


Wow, I never expected this card to come out so quickly.
#62
cheezburger
crazyeyesreaper wrote:
> It makes no sense to have a stop-gap GPU cycle cost any more than is needed to hold their market share and keep Nvidia in check.

That is exactly what ATi was thinking when they assumed R300 would last forever and people would be satisfied with current performance and not want more. Nvidia may be stuck for a bit, but it will come back and prove more value than a third-rate company like AMD while people like you sit back and enjoy that little success. At this rate it's likely to happen just like in the old days. It would be a big checkmate if Nvidia made a 512-bit part with 64/128 ROPs and 512 CUDA cores while removing all the GPGPU features. It has happened before (NV40 was said to be nothing, but it hit ATi hard when it released).

Ancient wisdom: "if you can't make history, you will be abandoned by history."

PS: Oh, right after this discussion, when I was about to bring evidence of 512-bit from the wiki, somebody just had to erase it and mess up the whole article. Wow. Crazy, if you are part of a hack group you are in serious trouble; Wikipedia is under investigation. Didn't realize someone just can't take the truth, lol.
Posted on Reply
#63
mastrdrver
Dude, I'll put down money that no single-GPU Cayman card will come with a 512-bit bus. They may come with something more than 256-bit (which I very highly doubt), but I would bet a large amount of money that no card will come with a 512-bit memory bus.

AMD has said several times why they won't do it. You should read the two Anandtech articles about the 4 and 5 series if you haven't.
Posted on Reply
#64
inferKNOX
buggalugs wrote:
> On a different topic, with all those connections it looks like we might have Eyefinity without the need for active adapters.
How do you figure that?
To use 3 monitors, you'd still have to convert the mini-DP to DVI, since using 2 DVI ports disables the HDMI, leaving only the 2 DP ports.
Posted on Reply
#65
meran
It will be 256-bit with 6400 MHz (effective) GDDR5, so stop arguing. Nvidia went with 384-bit because they can't make GDDR5 touch 5000. A lot of people think more bits is better, but really, more memory speed is a lot better. You get rid of:
1: 2x the wiring on the PCB
2: 2x the memory chips
3: EMI, which will let you hit higher clocks
4: and so you end up with a much cheaper, less complex PCB
See why the 460 can hit higher memory speeds than the 480? Because it's less complex on the memory controller side and the PCB side.
And if they made it 384-bit, it wouldn't hit 6400 MHz easily, would it??
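If the rumored 6400 MT/s figure were to hold, the narrower bus would indeed out-run the GTX 480's wider but slower interface; a quick sketch, where only the GTX 480 numbers are published specs and the 256-bit row is the rumor discussed above.

```python
# Comparing the rumored 256-bit / 6.4 GT/s configuration against the
# GTX 480's reference 384-bit / 3.696 GT/s interface.  The first entry is
# the rumor above; only the GTX 480 figures are published specs.
configs = {
    "Rumored 256-bit @ 6.4 GT/s":   (256, 6.400),
    "GTX 480 384-bit @ 3.696 GT/s": (384, 3.696),
}

for name, (bus_bits, rate_gtps) in configs.items():
    print(f"{name}: {bus_bits / 8 * rate_gtps:.1f} GB/s")
# Rumored 256-bit:  204.8 GB/s
# GTX 480 384-bit:  177.4 GB/s
```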
Posted on Reply
#66
KainXS
AMD is highly against increasing the memory bus on their cards, which is why they waited until the 5870 to do it while Nvidia had been doing it three series before them because they had no choice, and AMD isn't going to do it again for a while. I would rather know more about the architecture itself, and whether or not more tessellators were added, than sit here and whine about the memory bus; without knowing anything about the architecture, talking about needing a bigger memory bus doesn't really mean much at this point.
Posted on Reply
#67
meran
KainXS wrote:
> AMD is highly against increasing the memory bus on their cards, which is why they waited until the 5870 to do it while Nvidia had been doing it three series before them because they had no choice, and AMD isn't going to do it again for a while. I would rather know more about the architecture itself, and whether or not more tessellators were added, than sit here and whine about the memory bus; without knowing anything about the architecture, talking about needing a bigger memory bus doesn't really mean much at this point.

I'm with ya :toast:

Check this out; if it's real it will :nutkick: Nvidia :D
forums.anandtech.com/showpost.php?p=30402647&postcount=497
Posted on Reply
#68
cheezburger
meran wrote:
> It will be 256-bit with 6400 MHz (effective) GDDR5, so stop arguing. Nvidia went with 384-bit because they can't make GDDR5 touch 5000. A lot of people think more bits is better, but really, more memory speed is a lot better. You get rid of:
> 1: 2x the wiring on the PCB
> 2: 2x the memory chips
> 3: EMI, which will let you hit higher clocks
> 4: and so you end up with a much cheaper, less complex PCB
> See why the 460 can hit higher memory speeds than the 480? Because it's less complex on the memory controller side and the PCB side.
> And if they made it 384-bit, it wouldn't hit 6400 MHz easily, would it??

Higher clock rates and even higher clock rates... where did I hear that before? Oh, NetBurst from Intel! Do you really think clock rate is important? No, it's IPC and DPR that are important. Fermi failed because it added too many features for scientific calculation and general computing (yeah, suck it Folding@home; most high-end users don't even care how many people die of cancer every year...). A 2 GHz Radeon with 12 GT/s GDDR would not perform any better than a fully specced Fermi II that is no longer a GPGPU. From die size to layout and wiring cost, Radeon wouldn't have any advantage anymore; maybe the Nvidia part would cost 20-30 dollars more? As for the RAM and 384-bit: if they can do a full die shrink and get rid of unnecessary parts like general compute, they would have a lot more headroom for those memory speeds. But again, 1.6 GHz or above isn't really necessary. In games like Crysis and S.T.A.L.K.E.R. the bus and ROPs matter more than shaders and RAM speed; most AMD fans don't know that and just keep blaming Crytek for optimizing the engine for Nvidia cards. But that is how Radeon cards are today: high core frequency with poor instructions per cycle, a tiny cache, relatively few ROPs, and inefficient 5D shaders that take a lot of die space. Hell, the 4870 still falls behind the 8800 GTS 640 in Crysis, F.E.A.R., and Quake Wars. In most benchmarks, clock rate does little to nothing for real-world gaming, and shaders only do well when the architecture is right (like G80) or the game is well optimized for a shader layout like AMD's 5D (like HAWX and Unreal Tournament). Maybe only 3DMark favors higher clock rates!?

It has been said so many times that Cayman is for the high-end market, so it has to be 64-128 ROPs, 3D/4D shaders, and a 512-bit bus, and it will cut many of the unused shaders from the Cypress design. PCB layout and wiring cost wouldn't be a consideration, unless you want to buy a crappy "high end" card like the 4870 that couldn't even compete with the 8800 GTX in 70% of games. Those useless shaders only work in games based on the Unreal 3 engine, but the Unreal 3 engine is garbage and basically console-exclusive. Like I said before, if they got rid of the 5D shaders, made them 4D or even 3D, and turned it into a pure gaming card, it would save plenty of space.

The GTX 460 hit higher RAM clocks because Nvidia wanted to grab back some market share, which forced them to do it, and it was mostly done by overclocking and overvolting because they can't get faster RAM. It's not "because it's less complex on the memory controller side and the PCB side"; it's because Nvidia hasn't been licensed to integrate faster GDDR5 yet, while AMD and Hynix hold the GDDR5 patents. As long as AMD doesn't authorize it, Nvidia and the GTX 480 will never get faster RAM. Memory controller and PCB layout cost only matter for mid-range boards, so obviously the GTX 460 is the correct move, but it wouldn't be correct for an upcoming GTX 485... If Hynix can license Nvidia faster RAM like AMD's, then with its wider bus Fermi will destroy any future Radeon line if they keep that pathetic R600 design.

If the 6770 (Barts) is a revolutionary design, then why wouldn't Cayman be a revolutionary design as well? Unless you're telling me they are enjoying their success and trying to milk it rather than put out a better product, like in the good old K8/R300 days. No wonder AMD/ATi will always be a secondary company...
Posted on Reply
#69
buggalugs
inferKNOX wrote:
> How do you figure that?
> To use 3 monitors, you'd still have to convert the mini-DP to DVI, since using 2 DVI ports disables the HDMI, leaving only the 2 DP ports.

Not necessarily. That's just the way AMD designed it for the 5xxx series; it doesn't mean it's impossible to do.

Sapphire has a special 5770 Flex model that can run 3 DVI monitors with no active adapters.

www.pccasegear.com/index.php?main_page=product_info&cPath=193_962&products_id=15368
Posted on Reply
#70
crazyeyesreaper
Not a Moderator
Lol, I smell an Nvidia fanboy here. Let's face it: the green team was late, they're still late, and they won't compete until 28 nm. The 6000 series gives ATi the lead for another year, meaning two years where ATi, now AMD, has been top dog in GPUs. You can spin it however you want; it doesn't change the fact that even the stripped-down, gaming-focused GTX 460 STILL consumes more power than a 5850 and close to a 5870 but is 20-40% slower in single-card configs, and the 480, while being the fastest single-GPU card, is still only the second-fastest single card overall.

And you seem to like talking about R600 and R300; what about G92, lol, in use for nearly four years with no real improvement, whereas ATi has scaled their design to offer better performance. You can argue your points all you want, but the fact is the 512-shader GTX 480 was only about 5% faster than the current 480-shader high end; both companies have room for improvement. And how is the R600 design pathetic? Last I checked, the 5850, 5870, and 5970 are highly competitive with Nvidia's offerings and generally use 35-40% less power doing it. And if it's truly a failure, then why did Nvidia lose so much market share to this "pathetic" design?
Posted on Reply
#71
buggalugs
cheezburger wrote:
> ...No wonder AMD/ATi will always be a secondary company...
AMD has been on top since the 5xxx series came out in 2009. They've sold millions of them and they continue to sell like hotcakes.
Posted on Reply
#72
cheezburger
crazyeyesreaper wrote:
> Lol, I smell an Nvidia fanboy here. Let's face it: the green team was late, they're still late, and they won't compete until 28 nm. The 6000 series gives ATi the lead for another year, meaning two years where ATi, now AMD, has been top dog in GPUs. You can spin it however you want; it doesn't change the fact that even the stripped-down, gaming-focused GTX 460 STILL consumes more power than a 5850 and close to a 5870 but is 20-40% slower in single-card configs, and the 480, while being the fastest single-GPU card, is still only the second-fastest single card overall.
>
> And you seem to like talking about R600 and R300; what about G92, lol, in use for nearly four years with no real improvement, whereas ATi has scaled their design to offer better performance. You can argue your points all you want, but the fact is the 512-shader GTX 480 was only about 5% faster than the current 480-shader high end; both companies have room for improvement. And how is the R600 design pathetic? Last I checked, the 5850, 5870, and 5970 are highly competitive with Nvidia's offerings and generally use 35-40% less power doing it. And if it's truly a failure, then why did Nvidia lose so much market share to this "pathetic" design?

Read, read, read my previous post first!

Fermi consumes 40% more power and has a 50% larger die because of all the non-gaming features (GPGPU). If they could remove them, AMD would be in serious trouble; if the GTX 480 were no longer a GPGPU, the die would shrink at least 40% while keeping the same spec. You don't seem to understand how much the ROPs and bus affect performance, do you? Yeah, of course the GTX 480 is still second place among single-card solutions, but don't forget this: a dual-GPU PCB costs about twice as much as a GTX 480's. The wiring is far, far more complex than on any single-GPU board. Let me tell you, a GTX 480 board is far cheaper than a Hemlock XT board; its only disadvantage is die size, that's all. If AMD insists on making only dual-GPU boards for the high end, they have to accept more layout cost than an "UNNECESSARY" 512-bit bus board would ever add. The GTX 460's case is more like the 5830: it cut off many features without optimizing the transistors, which makes it inefficient for its die size (a 324 mm^2 "crippled" die drawing the power of a 400 mm^2 one, which seems logical).

You keep talking about how good the R600 architecture is while ignoring the fact that they sacrificed performance to save production cost. G92 came out in November 2007, which is not even three years ago; where did you get four years? Even though G92 is old, the HD 4800/5770 still can't outpace G94 by a tremendous margin, the GTS 250 competes with the 4850, and AMD still couldn't field a better mid-range product to outclass the old G92 line. :D Where has AMD been all these years? :D And Nvidia only lost share because the G92/G94 line was discontinued with no product to replace it; they only lost a bit because of the back-to-school season when they had nothing to sell, not because of Fermi. And yet R600 IS a pathetic design whether you agree or not.
Posted on Reply
#73
DrPepper
The Doctor is in the house
cheezburger wrote:
> Higher clock rates and even higher clock rates... where did I hear that before? Oh, NetBurst from Intel! Do you really think clock rate is important? No, it's IPC and DPR that are important.

High clock speeds do not = NetBurst. Core i7 clock speeds are about as high as NetBurst's were (though NetBurst could go higher), yet the i7 does much more per clock.

cheezburger wrote:
> Hell, the 4870 still falls behind the 8800 GTS 640 in Crysis, F.E.A.R., and Quake Wars. In most benchmarks, clock rate does little to nothing for real-world gaming, and shaders only do well when the architecture is right (like G80) or the game is well optimized for a shader layout like AMD's 5D (like HAWX and Unreal Tournament). Maybe only 3DMark favors higher clock rates!?

Total garbage.

cheezburger wrote:
> ...unless you want to buy a crappy "high end" card like the 4870 that couldn't even compete with the 8800 GTX in 70% of games. Those useless shaders only work in games based on the Unreal 3 engine, but the Unreal 3 engine is garbage and basically console-exclusive. Like I said before, if they got rid of the 5D shaders, made them 4D or even 3D, and turned it into a pure gaming card, it would save plenty of space.
More garbage.
Posted on Reply
#74
btarunr
Editor & Senior Moderator
Stick to the topic, people.
Posted on Reply
#75
vMG
_JP_ wrote:
> They are DisplayPorts.

Not widely used or known, because most screens these days don't support the connector. It's mainly used on Apple products.
Posted on Reply