
AMD Details Plans to Deliver 25x APU Energy Efficiency Gains by 2020

Discussion in 'News' started by Cristian_25H, Jun 19, 2014.

  1. theoneandonlymrk

    theoneandonlymrk

    Joined:
    Mar 10, 2010
    Messages:
    3,411 (2.02/day)
    Thanks Received:
    572
    Location:
    Manchester uk
    Quick, I've got one, get the net.
    :D
     
    More than 25k PPD
  2. Dj-ElectriC

    Dj-ElectriC

    Joined:
    Aug 13, 2010
    Messages:
    2,214 (1.45/day)
    Thanks Received:
    843
    This just in: AMD are planning to exist until at least 2020.


    [humour]
     
    HalfAHertz and theoneandonlymrk say thanks.
  3. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    1,474 (1.29/day)
    Thanks Received:
    520
    AMD might be making plans for the future, but that doesn't mean that the competition stands still.
    This also seems to reflect the classic mindset that dug a large hole for AMD in the first place. A single minded focus on what would become K8 and Bulldozer whilst almost totally ignoring the competition and expecting Intel to persevere with Netburst and Core respectively. Intel might be wedded to x86, but that doesn't mean it's their sole focus - they do have an ARM architectural licence, and the IP deal (rare for Intel) with Rockchip tends to point to some diversification in processor strategy.

    @GhostRyder
    You keep dreaming those dreams, son. The naïveté is refreshing. Last time I checked, Intel was built across quite a few product lines, and even taking CPUs in isolation, they basically own the x86 pro markets. HSA is all nice and dandy, but at some stage it has to progress to actual implementation rather than PPS decks and "The Future IS..."™. For that to happen, AMD need to start delivering. They won't have IBM on board, and Dell, Cisco, and HP are all firmly entrenched in the Intel camp.
     
  4. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,455 (6.47/day)
    Thanks Received:
    2,174
    Location:
    Concord, NH
    The problem is that regardless of what AMD does, Intel can always be one step ahead because of how much money Intel has available for things like R&D.

    Seriously, how do you think AMD plans to contend with CPUs like the C2750? It's like an i5 without an iGPU, with twice as many cores and the PCH put onto the CPU. It's everything you could ever want from a low-power CPU, with the exception of half-decent graphics, but Intel already knows how to play that game with Iris Pro, and if the consumer market ever demanded it, I'm sure Intel would deliver. It's also important to remember that Intel's iGPUs aren't as crappy as they used to be (most people don't game; keep that in mind too.)

    AMD should take all this PR funding and put it into R&D because pandering to the masses isn't going to make their hardware any better than it already is. I don't see Intel making claims like this nearly as often as AMD does when it comes to PR.

    With all of this said, I still love my AMD graphics cards but I'm glad I decided to get an i7.
     
  5. GhostRyder

    GhostRyder

    Joined:
    Apr 29, 2014
    Messages:
    1,145 (6.54/day)
    Thanks Received:
    422
    Location:
    Texas
    @HumanSmoke, and you keep blowing that ignorant smoke. Funny read, as per usual. Gee, I wonder why IBM, Dell, and some of the other camps are stuck with Intel; could it be that whole business that just got another settlement, recently or in the past? Pick your poison. Did I ever mention HSA once?

    But then again I expect nothing less from you hence why I rarely care anymore what you have to say. Keep posting I have nothing to say to you.

    Maybe. No one is saying an i7 isn't better than anything AMD has on the table. Intel has the best performance right now, and that's not going to change for a while.
     
    Last edited: Jun 20, 2014
  6. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,455 (6.47/day)
    Thanks Received:
    2,174
    Location:
    Concord, NH
    ...or power efficiency for that matter and if Intel's iGPUs continue to improve, AMD is going to lose the iGPU advantage as well which leaves them with nothing but cost. I don't know about you, but that troubles me.
     
  7. GhostRyder

    GhostRyder

    Joined:
    Apr 29, 2014
    Messages:
    1,145 (6.54/day)
    Thanks Received:
    422
    Location:
    Texas
    True, but the chips that do have Iris Pro are expensive at the moment. The mobile market is where these matter and where they shine.

    Right now Iris Pro's main advantage is the RAM built into the chip. It depends on how far they take it, but I could dig it either way if offered at a decent price.
     
  8. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    1,474 (1.29/day)
    Thanks Received:
    520
    Well, talking of ignorance, IBM haven't been with Intel since the 5162... twenty-eight years ago.
    Nope, but then I'd be surprised if you did, since you think the computing business revolves around APU gaming laptops.
    And so it should, as well as everyone else for that matter. AMD's business strategy seems to be based on razor-thin margins (console and consumer APUs, ARM cores for servers), which means they need large sales volumes. OK when you have OEM confidence and a locked-down market; with the exception of the gaming consoles, which don't net big returns, that isn't the case.
    How long do you think it would take Intel to jam an HD 5200 into any chip if they felt that their market dominance was threatened by not having it? This is a company with a huge fabrication overcapacity.
     
    Aquinus says thanks.
  9. GhostRyder

    GhostRyder

    Joined:
    Apr 29, 2014
    Messages:
    1,145 (6.54/day)
    Thanks Received:
    422
    Location:
    Texas
    "Are stuck", they use their own processor and mostly intel in the server and desktop world which now or are about to belong to Lenovo. Want a picture of an IBM machine with an intel processor inside?

    It being a media center since a 600 dollar APU laptop is highly unlikely to be a straight compute device. Im not surprised you don't understand there are people out there that are casual users that use there laptops as media houses and do not intend to spend a fortune on a laptop.

    Yea because over ten million devices sold is minor...

    Or they could just stick with what they usually do...

    Now then I'm done with you, and I'll leave on a nice Mark Twain quote which i should heed. I'm sure your response will be equally hilarious but I would rather not drag this thread any further off subject than this.
     
  10. Over_Lord

    Over_Lord News Editor

    Joined:
    Oct 13, 2010
    Messages:
    751 (0.51/day)
    Thanks Received:
    86
    Location:
    Manipal
    And yet you are still here :p

    Looks like somebody 'roofied' you.

    See what I did there?
     
    Prima.Vera says thanks.
  11. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    1,474 (1.29/day)
    Thanks Received:
    520
    Oh, you suddenly want to bleat on about servers when it's IBM vs Intel, but when it's AMD vs Intel, server share isn't a subject for discussion? You seemed to want to focus entirely upon consumer products. I adapted.
    You do realise that AMD's whole enterprise strategy is predicated upon HSA?
    You flip-flop faster than a politician caught red-handed with a rent boy.
    Didn't you say that last time out? Oh, yes! You did
    :rolleyes: o_O
    So what? Sales mean f___ all if it doesn't translate into revenue.
     
    Aquinus says thanks.
  12. R0H1T New Member

    Joined:
    Apr 12, 2013
    Messages:
    26 (0.05/day)
    Thanks Received:
    21
    Ahem you were saying ~
    Some features of Skylake graphics architecture

    The fact is Intel has benefited greatly from the innovations AMD has brought to x86 & general computing, whilst the single biggest gift they've received from Intel in the last decade has been the bribes to OEMs circa 2006; in other words, a stab in the back! Also, Nvidia is embracing HSA with CUDA 6 (software-only atm), so what I see from your post is ignorance for one, & secondly you (perhaps) think that Intel is pro-consumer when in fact they're virtually the exact opposite, & their actions over the last many years, like unfairly blocking overclocking on non-Z boards just recently, certainly prove this point!
     
    GhostRyder says thanks.
  13. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    1,474 (1.29/day)
    Thanks Received:
    520
    Directly from your quote:
    and this is not confirmed, this feature will allow the CPU and GPU to share system memory, which should boost performance of heterogeneous applications.
    Which heterogeneous applications would they be ? Would these be future applications or applications actually available?
    And? What has that got to do with AMD's strategic planning?
    Yep. That's Intel.
    Not strictly HSA, it's unified memory pooling and isn't Nvidia part of the OpenPOWER consortium rather than the HSA Foundation?
    If you think OpenPOWER, Intel's UMA, and HSA are all interchangeable on a software level I think you're going to have to show your working before you start bandying around terms like ignorance.
    Maybe you should stop ascribing conclusions based on comments that haven't been made.
    I'm a realist, and I see what the vendors do, how they achieve it, and the outcomes. Noting the facts doesn't imply anything other than noting the facts.
    Looks like you're just looking for an excuse to vent because this has absolutely no correlation to anything I've commented on.

    Looking for an argument that Intel isn't an abuser of its position? You won't find one here. Intel's modus operandi is fairly well known. Intel's failings as a moral company don't excuse AMD's years of dithering, changing of focus depending upon what others are doing, saddling themselves with a massive debt burden by paying double what ATI was worth, selling off mobile IP for peanuts, dismissing the mobile market in toto, and a host of missteps.

    You want to talk about ignorance? Blame Intel's bribery of OEM's (particularly Dell) to keep AMD out of the market? Know why the settlement wasn't bigger? AMD - thanks to Jerry "Real men have fabs" Sanders were too proud to second source foundry capacity. Bribes from 2006? Sure there were....AMD also couldn't supply the vendors it already had. Think that was a blip? Analysts were warning of AMD processor shortages years before this ever became acute. AMD complaining that Dell didn't want their processors was offset to a degree by OEM's complaining that AMD chips weren't available in quantity (so, 2002, 2006, and this from 2004 - see the trend), so AMD waited until vendors were publicly complaining* (and Jerry had been put out to pasture) before AMD struck a deal with Chartered Semi....and even then used less than half their outsourcing allocation allowed under the licence agreement with Intel.

    Sometimes the truth isn't as cut-and-dried as good versus evil.

    * Poor AMD planning causes CPU shortages: ....But European motherboard firms, talking to the INQ on conditions of anonymity, were rather more blunt about the problem. One described the shortages as due to "bad planning".
     
    Last edited: Jun 20, 2014
    Aquinus says thanks.
  14. R0H1T New Member

    Joined:
    Apr 12, 2013
    Messages:
    26 (0.05/day)
    Thanks Received:
    21
    Well, Intel is going to implement HSA now; whether they'll call it xSA or whatever remains to be seen. OpenCL 2.0, for instance, brings SVM (shared virtual memory) support, & unless Intel somehow plans to delay implementation of an industry-wide open standard in their iGPUs, I don't see how that piece of info is speculation.

    OpenPOWER is completely separate from HSA, hUMA & OpenCL because it's just something IBM's done to save their POWER-based server line. As for Nvidia, since they're going to add OpenCL 2.x support to their GPUs, it means they'll be jumping on the HSA bandwagon themselves; again, it doesn't have to be called HSA to be implemented as such, & I won't be surprised if MS brings OS-level support for HSA in Win9.

    Not really, I've heard this "HSA being vaporware" stuff more than once & it just irks me more every time I hear it. Lastly I'll add that it isn't AMD's fault that most software/game developers are fat ass lazy turds that need spoon feeding, I mean how long has it been since we've had multicore processors & the number of applications/games properly utilizing them is still in the low hundreds at best. It took the next gen consoles for game developers to add support for four or more cores in their game engines, it'll take something bigger for them to adopt HSA but I have very little doubt that those who don't or won't will become extinct, perhaps not in the next 5yrs but certainly in a decade or so. What we as consumers can do is support (software & game) developers that promote innovation & shun those who're dinosaurs in the making.
     
    Last edited: Jun 20, 2014
    GhostRyder says thanks.
  15. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,455 (6.47/day)
    Thanks Received:
    2,174
    Location:
    Concord, NH
    Are you a software developer? Do you write concurrent code that is thread-safe and works all the time? Yeah, I didn't think so. Keep your assumptions about how people like me do my job to yourself. Don't presume to talk about something where you have absolutely no idea what kind of work needs to be done to accomplish what you suggest. There are a lot of considerations that need to be made when writing concurrent code, even more so when the order that data is processed matters, because when you introduce a basic (and common) factor like that, the benefit of threading and multi-core systems goes out the window. You still have a bottleneck; the only difference is that you've moved it from a single thread to a lock where only one thread can run at once, even if you spin up 10 of them.

    With all of that said, it pisses me off when people like you think that writing concurrent code that scales is easy, when it's not.

    For it to scale, most of it needs to be parallel, not just a tiny bit of it and I can't even begin to describe to you how complex that can get.
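The lock bottleneck described above can be demonstrated in a few lines (a toy sketch of my own, not code from any application discussed here): ten threads all contend for one lock, the result comes out correct, but the protected work only ever runs one thread at a time.

```python
import threading

# Ten workers, one lock: the "parallel" section serializes exactly as
# described above. We track how many threads are ever inside the
# critical section at once; it never exceeds 1.
lock = threading.Lock()
inside = 0
max_inside = 0
counter = 0

def worker() -> None:
    global inside, max_inside, counter
    for _ in range(1000):
        with lock:                      # only one thread may proceed
            inside += 1
            max_inside = max(max_inside, inside)
            counter += 1                # the actual "work"
            inside -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)     # 10000: correct, but computed one thread at a time
print(max_inside)  # 1: the bottleneck moved to the lock, not away
```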

    [Image: Amdahl's Law graph showing theoretical speedup versus number of cores for various parallel fractions]
    So if 50% of your workload is parallel, you'll benefit from two cores basically. HALF of your work needs to be parallel just for a 2.0 speedup... and you want code to run on how many cores again?

    http://en.wikipedia.org/wiki/Amdahl's_law
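The formula behind that graph can be checked in a few lines (a minimal sketch; the function name is mine):

```python
# Amdahl's law: maximum speedup when a fraction p of the work is
# parallel and runs on n cores. With p = 0.5, even unlimited cores
# cap out below 2x.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.5, 2))     # ~1.33x with 2 cores
print(amdahl_speedup(0.5, 1000))  # approaches 2x, never reaches it
print(amdahl_speedup(0.95, 8))    # ~5.9x: even 95%-parallel code falls short of 8x
```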
     
    Last edited: Jun 20, 2014
    The Von Matrices says thanks.
  16. R0H1T New Member

    Joined:
    Apr 12, 2013
    Messages:
    26 (0.05/day)
    Thanks Received:
    21
    I don't think there's any need to get offended by something that's posted regularly on this & many other forums, though in a more subtle & (somewhat) polite way. What do you think X game being a cr@ppy port means or Y application being slow as hell on my octa core signifies ?

    Also, people (including but not limited to developers) do need a push to get things done more efficiently; for instance, how many browsers were using GPU acceleration before Google (Chrome) pushed them toward obsolescence? How many browsers still don't use SSE 4.x or other advanced instruction sets? This isn't just you I'm talking about, but it also isn't a blanket statement targeting every software/game developer out there, since I clearly put the emphasis on most!

    Irrelevant since I didn't mention the type of workload & thus you shouldn't try to sell Amdahl's law as an argument in such case.
     
    Last edited: Jun 20, 2014
    GhostRyder says thanks.
  17. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,455 (6.47/day)
    Thanks Received:
    2,174
    Location:
    Concord, NH
    It signifies that maybe the developers had little time and/or little funding to make an already existent game run on a different platform, so it's realistic to assume that the code can't easily be made to utilize more cores without investing a lot more time (which, to businesses, is money). They're only going to spend so much time on making it perform better than it needs to.

    GPU acceleration started to become important in browsers because of the complexity of rendering pages now versus pages several years ago. Web applications are much richer and have much more client-side scripting going on that alters the page in ways that make it more intensive than it used to be. Now, that's just rendering: it was becoming a bottleneck, and in Google's case with Chrome, they solved it. However, that doesn't mean that Chrome uses any more threads to accomplish the same task.

    You don't need to mention the type of workload for it to be relevant, because performance, and the ability to make any application performant on multi-core systems, depends on the kind of workload. You can't talk about any level of parallelism without discussing the workload that is to be run in parallel. The point is that making applications multi-threaded depends highly on the application; not all applications can be made to run in parallel, and the impression you're giving me is that you don't believe that is the case. That is the point I was trying to prove with Amdahl's law.
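The workload distinction can be illustrated with a toy sketch (hypothetical code, not from any application discussed here): an order-independent map parallelizes cleanly, while a running sum has a data dependency that no number of threads can remove.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 9))

# Order-independent: each item maps to a result on its own.
# This is the shape of workload that parallelizes cleanly.
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(lambda x: x * x, data))

# Order-dependent: each step consumes the previous step's output.
# More threads don't help here; the data dependency is the bottleneck.
running = []
total = 0
for x in data:
    total += x          # step N needs the result of step N-1
    running.append(total)

print(squares)  # [1, 4, 9, 16, 25, 36, 49, 64]
print(running)  # [1, 3, 6, 10, 15, 21, 28, 36]
```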
     
    The Von Matrices and HumanSmoke say thanks.
  18. R0H1T New Member

    Joined:
    Apr 12, 2013
    Messages:
    26 (0.05/day)
    Thanks Received:
    21
    What would you say about the likes of EA (DICE) & their bug-filled launch of BF4, or is it that you're downplaying the fault of developers in such a mess? I would put winrar/winzip in the same category, though they've done a lot, especially in the last couple of years, in implementing multi-core enhancements & hardware (OpenCL) acceleration respectively.
    The GPU acceleration was just an example of how developers need to be aware of the demands of this ever-evolving computing landscape before some of them become irrelevant. Btw, you still didn't answer why major browsers don't implement SSE 4.x or other advanced instruction sets. FYI, Firefox had GPU acceleration even before IE & Chrome, but they enabled it by default only after Chrome forced them to; the same goes for IE & their implementation of it since version 9.
    Again you're nitpicking on what I said; my basic point was that most developers (not all of 'em) don't use the tools at their disposal as effectively as they could, or rather as they should.
     
    Hilux SSRG and GhostRyder say thanks.
  19. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,455 (6.47/day)
    Thanks Received:
    2,174
    Location:
    Concord, NH
    That depends. If it's the fault of them using poorly designed libraries that they wrote in the past, it could be a cost/time-saving measure (definitely a bad one), but the company could have pushed them down that road. It could be the developer's fault, but that really depends on the timeline they had for doing the work they had to get done. Development doesn't always go the way you want it to. Sometimes that's the developer's fault and sometimes it isn't. It's hard to say without being inside the company and seeing what is going on, but one thing is certain: it's definitely EA (DICE)'s fault as a whole. :) I don't dispute that for a second.

    I would put winrar/winzip and other compression utilities in the category of workloads that are more easily parallelized than others because of the nature of what they're doing. Once again, this comes down to the workload argument. Archival applications and games are two very different kinds of workloads; it's a lot easier to make something like LZMA2 run in parallel than something like a game, which is incredibly more stateful than an algorithm for compression or decompression. This isn't a matter of tools: you could have all the tools in the world, but that won't change the nature of some applications and how they need to be implemented. OpenCL doesn't solve all programming issues, and it doesn't mysteriously make things that couldn't run in parallel suddenly able to. These tools you talk about enable already parallel applications to scale a lot better and across more compute cores than they did before; they don't solve the problem of having to make your workload thread-safe without being detrimental to performance in the first place.
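Why archivers parallelize so easily can be sketched in a few lines (an illustration under my own assumptions, using zlib rather than LZMA2; real archive formats handle chunk framing far more carefully): split the input into independent chunks, compress each on its own worker, concatenate.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Each chunk is compressed independently: no chunk depends on the
# output of another, which is exactly what makes this workload
# easy to run in parallel.
def compress_chunked(data: bytes, chunk_size: int = 1 << 16) -> list:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_chunked(blobs: list) -> bytes:
    return b"".join(zlib.decompress(b) for b in blobs)

payload = b"the quick brown fox " * 50_000
blobs = compress_chunked(payload)
assert decompress_chunked(blobs) == payload  # lossless round trip
```

A game, by contrast, has no such clean chunk boundary: each frame's state depends on the previous frame's, which is the stateful dependency described above.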

    You complain about me nitpicking, but you're pointing out things that require that level of analysis and detail because problems like these aren't as easy to solve as you make them out to be.

    One question: have you ever tried to write some OpenCL code and run it on a GPU? Try doing something useful in it if you haven't, and you'll understand real quickly why only applications that are mostly parallel code in the first place use OpenCL. I get the impression that you haven't, so you shouldn't talk about something you've never done. I can, because I have... trust me, it's not intuitive, it's hard to use, and it's only helpful in very selective situations. I would never use it unless I was working with purely numerical data that was tens of gigabytes large or bigger, and only if the algorithm I'm implementing is almost completely stateless (or functional, if you will). Games (other than the rendering part, which GPUs are already doing) hardly fit any of those criteria. It's not that developers aren't using OpenCL; it's that they can't, or it doesn't make sense to, in most real-world applications in the consumer market.

    I do enjoy listening to you try to say what developers are and are not doing right when you're not in their shoes. Even as a developer, I wouldn't presume to know more about another developer's project than they do without seeing the code itself and having worked with it. So I find it both amusing and disturbing that you feel you can voice your opinion in such an authoritative way when not even I would make those kinds of claims given my own experience in the subject, as I'm a developer professionally and am even working on a library that uses multiple threads.

    Tell me more about why you're right.
     
    The Von Matrices and HumanSmoke say thanks.
  20. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    8,361 (2.55/day)
    Thanks Received:
    1,216
    Comparative analysis shows this is plausible given the current trend of 0, I for one approve this message.

    **Edit** There is a lot of butthurt in this thread, over PR, nothing more. I am glad they have this goal.

    How about we debate the power usage, and how perhaps an embedded capacitor (or several) that can provide the peak power required when firing up more cores or execution units could give us a 200 MHz CPU that clocks to 4 GHz instantly?

    Decoupling capacitor built in anyone?
     
    Last edited: Jun 20, 2014
    GhostRyder says thanks.
    10 Million points folded for TPU
  21. Fx

    Fx

    Joined:
    Oct 31, 2008
    Messages:
    505 (0.23/day)
    Thanks Received:
    87
    Location:
    Portland, OR
    You sound like an uneducated fanboy. Useless APUs... really? You have some serious reading, comprehension, and contemplating to do.
     
  22. Prima.Vera

    Prima.Vera

    Joined:
    Sep 15, 2011
    Messages:
    2,246 (1.98/day)
    Thanks Received:
    293
    Seriously, I don't get AMD. They are a huge company, so really, can't they afford to hire 2 or 3 top design engineers to design a new top CPU that could compete with the latest i7 from Intel?? I mean, geez, even reverse-engineer the stuff, or follow and try to improve on Intel's design if they are lacking inspiration. The CPU design, architecture, and even detailed charts and stuff are all over the internet.
    Honestly, I don't get it...

    That's a little childish I guess. Latest top i7 CPUs from Intel are also APUs.
     
    Last edited: Jun 22, 2014
  23. Franzen4Real New Member

    Joined:
    Jun 22, 2014
    Messages:
    7 (0.06/day)
    Thanks Received:
    2
    That was quite the amusing exchange, lol! Gotta love the armchair developers. Anyways, if I may ask something somewhat on the topic of multi-threaded gaming: I happen to not be a developer and I do not write code, so it was enlightening reading your thoughts on the whole workload dependency for multi-threading. I too have wondered why gaming has taken a while to really embrace multi-core processors, and your explanation helps to understand some of those reasons (though I can say I never thought it was because devs are fat and lazy). My question, though, is that with Mantle and DX12 it seems we are looking for more and more ways to offload the CPU as much as possible, and to me that makes the need to heavily thread games (which, as you say, may not really be possible anyway) kind of irrelevant. It seems like the direction of making games is that the less the CPU is involved, the better. Certainly correct me if I'm wrong, but I don't understand why people like whats-his-name that was trying to argue programming with you are wanting more and more CPU utilization when, to me at least, it seems pretty clear that the road to more performance in games relies less on the CPU and more on offloading to the GPU. (Maybe they just want to justify their expensive purchase of many-core CPUs to play Battlefield? I dunno..)

    I apologize, I didn't mean to completely ignore the topic of the article in the first place... But as a user of both AMD and Intel (actually an AMD user from Socket A up until my first Intel build at the release of Core 2), I certainly hope that they achieve these goals, simply for the reason that any innovation from any team is always good for us. I can remember when AMD first mentioned Fusion and adding the GPU to the CPU die... then lo and behold, here comes Intel taking that idea and running with it, beating AMD to market initially with a crappy solution (though AMD still has the better iGPU today), and look where we are now with integrated graphics. I for one do appreciate the strides made with iGPUs for systems such as my Surface Pro. (I would really like to see an AMD APU version of one!) Or when AMD put the memory controller on die with the Athlon 64, then here comes Intel with Nehalem doing the same thing. It really is too bad that they took such a step backwards with Bulldozer when they seemed to have good momentum going and were hanging with Intel back in those days. I really hope that the day may come again when they are close and drive each other to really innovate. I've been an Intel user now since Core 2 and would love to feel like I have another option when building my desktops... possibly by the time I'll be looking to replace my upcoming Haswell-E system?
     
    Last edited: Jun 22, 2014
  24. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,455 (6.47/day)
    Thanks Received:
    2,174
    Location:
    Concord, NH
    No, those are the kinds of questions that people need to be asking. It's important to remember one basic thing: games are very complex, and Mantle and DX12 are only doing part of the task. More graphics- and rendering-related tasks are being offloaded to the GPU because that is where they belong. However, this doesn't change anything for game logic itself, and I'm sure if you've played Civilization 5, you'll have seen how, as the world gets bigger, ending each turn takes longer and longer.

    There are really maybe three situations that I feel are important for multi-threading:
    A: When you know what you want and have everything you need to get it, but it's something you don't need until later (a form of speculative execution).
    B: When you have a task that needs to run multiple times on multiple items and doesn't produce side effects (e.g. graphics rendering or protein folding).
    C: A task that occurs regularly (every x seconds, or x milliseconds) and requires very little coordination.
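The speculative-execution situation in that list can be sketched with a future (a minimal illustration; `expensive_lookup` and its sleep are stand-ins of my own, not from any real codebase): start the work as soon as its inputs exist, and block only at the point where the result is actually needed.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for I/O or heavy compute we know we'll need later.
def expensive_lookup(key: str) -> str:
    time.sleep(0.1)
    return key.upper()

with ThreadPoolExecutor() as pool:
    future = pool.submit(expensive_lookup, "config")  # kick off early
    # ... unrelated work proceeds here while the lookup runs ...
    value = future.result()  # block only at the point of use

print(value)  # CONFIG
```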

    As soon as you have side effects, or tasks that rely on the output of several other tasks, the ability to make something usefully multi-threaded goes out the window. People might not realize it, but games are one of the most stateful kinds of applications you can have, and just "making it multi-threaded" doesn't solve problems. In fact, if the workload wasn't properly made to run in parallel, making an application multi-threaded can degrade performance (when the overhead is more costly than the speedup that's gained) or make the executing code more confusing because of any locking or thread coordination you may have to do.

    I currently develop with Clojure, which is a functional language on top of the JVM, among other platforms which I don't typically use (except for ClojureScript, which is interesting).
    ...and

    To make a long story short, application state is what makes applications demand single-threaded performance and not managing it well is what reinforces that.
     
    Last edited: Jun 22, 2014
  25. m4gicfour

    m4gicfour

    Joined:
    May 21, 2008
    Messages:
    847 (0.36/day)
    Thanks Received:
    309
    DEM DANG AMD TURK URR JERBS or something.

    I concur.

    I, for one, would like one of these 0 new free cards. With a 25X improvement over the current 0 free cards, it should not be any issue for me to receive [RESULT UNDEFINED]

    Your post is invalid. Minimum butthurt level not met. Ignoring.
     
