Thursday, May 24th 2012

EA-DICE Frostbite Titles in 2013 Will Require 64-bit Windows

While content-creation and media-transcoding applications have transitioned to native x86-64 builds that can take advantage of large amounts of system and video memory, a similar transition by game developers has been rather slow. Very few PC games ship with 64-bit executables, as most are ported from game consoles, which have slim system requirements.

EA-DICE has been at the forefront of developing games that take advantage of the latest PC technologies (such as DirectX 11), and according to Johan Andersson, a lead developer and rendering architect with the studio, games driven by the Frostbite engine that are slated for 2013 will require 64-bit operating systems. These games will not run on 32-bit Windows, or in 32-bit mode on 64-bit Windows, but will ship with full-fledged 64-bit executables. The 64-bit address space allows games to take advantage of more than 4 GB of system memory and, more importantly, high amounts of video memory, as 2 GB and 3 GB become standard with performance-segment graphics cards.



113 Comments on EA-DICE Frostbite Titles in 2013 Will Require 64-bit Windows

#1
FordGT90Concept
"I go fast!1!11!1!"
It's a series of #ifdef blocks in the code. It isn't very difficult to write code that supports compiling both. Competent programmers have been doing this for many years.


Vinska said:
You, maybe. Those who want to build a supercomputer with all the memory in one address-space, might find 40-bit addressing a bit of a problem.
That's 128 sticks of 8 GiB (the highest density available) RAM for ONE processor. I don't know of any systems that even come close to that. Systems with >1 TB of RAM are clusters: each processor has a small pool of memory, and they send requests amongst themselves to copy data between them. A processor in one node of the cluster can't directly access the RAM of another without requesting it through the target node's processor(s).
Posted on Reply
#2
Vinska
FordGT90Concept said:
It's a series of #ifdef blocks in the code. It isn't very difficult to write code that supports compiling both. Competent programmers have been doing this for many years.
Yet it doesn't take much for the code readability to become like this.
Thus, the fewer #ifdefs and such that need to be used, the happier the devs are. That is a good thing for the users, if you know what I mean...
Posted on Reply
#3
eidairaman1
It's funny, but there's no means of making the 1s and 0s any more efficient than they already are, lol
Posted on Reply
#4
MxPhenom 216
Corsair Fanboy
seronx said:
8GB DDR3 Single Channel + Windows 8 Pro + 512-GB/1-TB SSD: 2013 HERE I COME!!!
single channel? Fail.

Why not dual channel for 16GB
Posted on Reply
#5
FordGT90Concept
"I go fast!1!11!1!"
Vinska said:
Yet it doesn't take much for the code readability to become like this.
Thus, the fewer #ifdefs and such that need to be used, the happier the devs are. That is a good thing for the users, if you know what I mean...
The only thing that really bothers developers is feature creep. It's gonna suck when EA tells DICE the number of support calls they're getting and DICE has to go back to all the code they wrote specifically for 64-bit and add those #ifdefs retroactively. It would have been wise to do it in the first place.
Posted on Reply
#6
newtekie1
Semi-Retired Folder
FordGT90Concept said:
It's a series of #ifdef blocks in the code. It isn't very difficult to write code that supports compiling both. Competent programmers have been doing this for many years.
Writing the code is probably not the issue; it isn't hard to program a game to use a DX9 rendering path either, but EA-DICE doesn't do it. Beyond just writing the code, there is testing it. Having both a 32-bit and a 64-bit exe means more QA time, especially for the 32-bit exe, because they are obviously writing the game to be 64-bit native and are also assuming at least 4 GB of RAM (otherwise, why 64-bit?). So to support 32-bit, they will have to do a lot of QA to make sure the game doesn't crash or freak out when limited to less than 4 GB.
Posted on Reply
#7
Dippyskoodlez
FordGT90Concept said:
It's a series of #ifdef blocks in the code. It isn't very difficult to write code that supports compiling both. Competent programmers have been doing this for many years.
Because that is an easy fix for actually utilizing >4 GB of RAM. :rolleyes: (among other possible 64-bit features.)
Posted on Reply
#8
eidairaman1
nvidiaintelftw said:
single channel? Fail.

Why not dual channel for 16GB
Because performance gains between single and dual channel are negligible. Machines like bigger frame buffers; they really don't care about what speed the RAM operates at, since most FSBs now run at a different speed than the RAM itself.
Posted on Reply
#9
Dippyskoodlez
eidairaman1 said:
because performance gains are negligible between single and dual channel
No, no they are not. Where did you get this information? If you're gonna argue this, you better provide some hard benchmarks.
Posted on Reply
#10
Aquinus
Resident Wat-man
Dippyskoodlez said:
No, no they are not. Where did you get this information? If you're gonna argue this, you better provide some hard benchmarks.
eidairaman1 said:
Because performance gains between single and dual channel are negligible. Machines like bigger frame buffers; they really don't care about what speed the RAM operates at, since most FSBs now run at a different speed than the RAM itself.
You're both right. Single-threaded performance will see no benefit from memory running in unganged dual-channel mode, where the channels are split into two 64-bit memory interfaces enabling two separate memory operations per full DRAM read/write cycle; however, applications that have long words in memory could benefit from dual-channel ganged mode (one large 128-bit memory interface, rather than two smaller ones). Dual-channel and unganged mode show their colors when you use two threads, just as quad-channel doesn't show its true colors unless you start using 3 or more threads. This is an excellent example of how the 2600k and 2700k compare to the 3820: single-threaded memory tasks on SB take off, but add a couple of threads and the 3820 eats the 2600k alive beyond 2 threads of heavy memory usage. I've only owned AMD motherboards that have allowed me to run dual-channel memory in ganged mode, though, and I'm not sure if this is an AMD-specific ability (I wouldn't be surprised if it wasn't).
Posted on Reply
#12
FordGT90Concept
"I go fast!1!11!1!"
newtekie1 said:
Writing the code is probably not the issue; it isn't hard to program a game to use a DX9 rendering path either, but EA-DICE doesn't do it. Beyond just writing the code, there is testing it. Having both a 32-bit and a 64-bit exe means more QA time, especially for the 32-bit exe, because they are obviously writing the game to be 64-bit native and are also assuming at least 4 GB of RAM (otherwise, why 64-bit?). So to support 32-bit, they will have to do a lot of QA to make sure the game doesn't crash or freak out when limited to less than 4 GB.
The extra QA is for 64-bit (troubleshooting usage of the extra registers), not 32-bit. It is also quite simple to tell if a program ran out of memory, and customer service already recommends upgrading computers frequently. It represents very little change from the current support model.
Posted on Reply
#13
Dippyskoodlez
FordGT90Concept said:
The extra QA is for 64-bit (troubleshooting usage of the extra registers), not 32-bit. It is also quite simple to tell if a program ran out of memory, and customer service already recommends upgrading computers frequently. It represents very little change from the current support model.
If you're going to cater to a game engine crippled by 32-bit, then why even code a 64-bit version? :rolleyes:

e.g. Crysis 2: made for the Xbox, plays like an Xbox game on PC. BF3: made for the PC, plays like a PC game.
Posted on Reply