RealTemp General Discussion

Discussion in 'RealTemp' started by unclewebb, Jun 28, 2008.

  1. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    I've been working hard to finalize 3.60 and get it uploaded here at TPU, but I don't have access to any new hardware, so it takes me forever to test new features and get any meaningful feedback from users. It's been like this for the last year. When I have the time and the motivation to do some programming, it takes too long to find out which new features work and which ones don't. The result is that I end up endlessly waiting, drift to other projects, and lose interest in the RealTemp project.

    If you are a dedicated tester like my friend burebista is and if you have access to some new Core i hardware, desktop or mobile, then send me a PM and let me know that you would like to help.
  2. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    RealTemp 3.60
    http://www.techpowerup.com/downloads/1872/Real_Temp_3.60.html

    It took a long time to finish it but it's finally official.

    The most recent addition is the ability to control the turbo multiplier and turbo TDP/TDC values in the newer Core i Extreme and K series CPUs. Intel plans more K series CPUs in the near future, so hopefully this will work with them.
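
    For the technically curious: on these unlocked CPUs the turbo ratios and the turbo TDP/TDC limits live in a pair of MSRs. The snippet below is only a rough sketch of that general idea based on the documented Nehalem/Westmere register layout, not the actual RealTemp code; read_msr() and write_msr() are hypothetical placeholders for the kernel-driver calls a Windows program needs in order to touch MSRs.

    /* Assumed mechanism: turbo ratios in MSR_TURBO_RATIO_LIMIT (0x1AD) and
     * turbo TDP/TDC limits in MSR_TURBO_POWER_CURRENT_LIMIT (0x1AC).
     * read_msr()/write_msr() are hypothetical kernel-driver wrappers. */
    #define MSR_TURBO_RATIO_LIMIT         0x1AD
    #define MSR_TURBO_POWER_CURRENT_LIMIT 0x1AC

    int read_msr(unsigned int reg, unsigned long long *value);   /* 0 on success */
    int write_msr(unsigned int reg, unsigned long long value);   /* 0 on success */

    /* Raise the 1-core and 2-core active turbo ratios on an unlocked CPU. */
    void set_turbo_ratios(unsigned int one_core, unsigned int two_core)
    {
        unsigned long long v;
        read_msr(MSR_TURBO_RATIO_LIMIT, &v);
        v &= ~0xFFFFULL;                      /* clear the 1-core and 2-core ratio bytes */
        v |= (one_core & 0xFF) | ((unsigned long long)(two_core & 0xFF) << 8);
        write_msr(MSR_TURBO_RATIO_LIMIT, v);
    }

    /* TDP limit: bits 14:0 in 1/8 W steps; TDC limit: bits 30:16 in 1/8 A steps.
     * Bits 15 and 31 enable the overrides. */
    void set_turbo_power_limits(double tdp_watts, double tdc_amps)
    {
        unsigned long long v;
        read_msr(MSR_TURBO_POWER_CURRENT_LIMIT, &v);
        v &= ~0xFFFFFFFFULL;
        v |= ((unsigned long long)(tdp_watts * 8.0) & 0x7FFF) | (1ULL << 15);
        v |= (((unsigned long long)(tdc_amps * 8.0) & 0x7FFF) << 16) | (1ULL << 31);
        write_msr(MSR_TURBO_POWER_CURRENT_LIMIT, v);
    }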
    rickss69 and mlee49 say thanks.
  3. mtosev

    mtosev New Member

    Joined:
    Mar 21, 2005
    Messages:
    1,463 (0.44/day)
    Thanks Received:
    145
    Location:
    Maribor, Slovenia
    Looks like I've found a bug:
    [screenshot attached]


    RealTemp and CPU-Z don't report the same frequencies. BTW, Intel's power saving is disabled.
  4. Super Sarge

    Super Sarge New Member

    Joined:
    Aug 16, 2009
    Messages:
    170 (0.10/day)
    Thanks Received:
    22
    Location:
    Jordan MN
    I am happy with 3.40; it works fine on my i7 920 machine. I tried the beta version of 3.60 and it caused BSODs, so I had to go back to 3.40. Like the man said, if it ain't broke, do not fix it.
  5. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    You're right, you have found a bug but the bug is with CPU-Z. At idle CPU-Z may not report the correct multiplier. I guess it does this so users can do a validation at a high MHz.

    If you have C1E enabled, try turning that off and see what the two programs report. With your core voltage down at 0.968 V, it looks like you have some power saving going on regardless of what you might have selected in the BIOS.

    RealTemp follows the method recommended by Intel in their November 2008 Turbo White Paper. CPU-Z does not. Core Temp also follows the correct method to determine the multiplier so you might want to try comparing to that.

    Super Sarge: Thanks for the bug report, but can you tell me a few more details, like what operating system you are using, what CPU, etc., and what happens when you try to run the program? Does it start up and then crash, or does it not start up at all? I can't fix a problem if I don't have any idea what the problem is.
    kzinti1 and mtosev say thanks.
  6. Super Sarge

    Super Sarge New Member

    Joined:
    Aug 16, 2009
    Messages:
    170 (0.10/day)
    Thanks Received:
    22
    Location:
    Jordan MN
    I really do not know; it was quite a while ago. I think it had something to do with C1E and/or Turbo, both of which I use at times (I use Turbo all the time). My OS is W7 64-bit Pro, with an Intel 920 CPU (D0) and 12 GB of Mushkin Triple Channel 1600 MHz Redlines. The program ran, but sometime during the day or night I would get a crash. I re-installed 3.40 and the problem never happened again.

    I just loaded 3.60 and will try it again. I unchecked EIST, as I do not use it, nor can I find any such setting in my BIOS (version 1003 for an ASUS P6T Deluxe V2, which is the latest BIOS).
    Last edited: Oct 1, 2010
  7. mtosev

    mtosev New Member

    Joined:
    Mar 21, 2005
    Messages:
    1,463 (0.44/day)
    Thanks Received:
    145
    Location:
    Maribor, Slovenia
    Frequencies are now the same after I disabled C1E in the BIOS. Looks like Core Temp uses the same principle, as it reported the same as RealTemp did.
  8. mlee49

    mlee49

    Joined:
    Dec 27, 2007
    Messages:
    8,462 (3.67/day)
    Thanks Received:
    2,096
    Thanks for your dedication, I appreciate it :)
  9. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    Thank you mlee49 for your support of RealTemp where it counts the most. $$$$$

    It doesn't take a lot of money to motivate me but even a handful of donations can make the difference between carrying on with this project or walking away from it.

    I was able to add some very useful features lately and I also got the 6 core version of RealTemp GT updated too. Being able to adjust the multipliers in the Core i Extreme and K series CPUs is a great new feature for RealTemp and is going to be even more useful in the new year when Intel decides to start releasing more K series CPUs for enthusiasts.

    Now for the big announcement. Finally I won't have to constantly defend myself from people that are always asking, "How come RealTemp is not the same as CPU-Z?" On the XS RealTemp forum today, the programmer of CPU-Z, in his own words, finally decided to come clean.

    I also showed him why I don't believe that TMonitor is any more accurate than CPU-Z is at idle but that's still a discussion in progress.

    Here's an example of what TMonitor tells me for my T8100.

    [screenshot attached]

    This CPU presently has EIST disabled. When you disable EIST in a Core 2 based CPU, the CPU gets locked at a fixed frequency. The multiplier reported in MSR 0x198 never changes from idle to full load.

    Using Intel's recommended method to determine the multiplier, RealTemp and ThrottleStop correctly show that the CPU is locked at the 11.5 multiplier.

    TMonitor is telling me that at idle the multiplier is at 6.0 and when I apply a load to the CPU, the multiplier goes up and down. That's wrong. The multiplier does not change when EIST is disabled. It can't. If you want to argue, that's great but you need to argue with Intel. TMonitor is just as inaccurate when run on Core i CPUs. It draws a nice graph but the information it is graphing is fundamentally wrong and inaccurate so it's pointless. TMonitor would be a very useful tool if it followed Intel's methods but there's no point in telling users that their CPU is doing something that it isn't.
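
    For reference, the basic idea behind the white paper method, as opposed to taking a snapshot of MSR 0x198, looks roughly like this: sample the IA32_MPERF and IA32_APERF counters over an interval and scale the base multiplier by their ratio. The snippet below is only a sketch of that published technique, not RealTemp's source; read_msr() is a hypothetical stand-in for the kernel-driver call needed to execute RDMSR.

    /* Average (effective) multiplier over a sample interval, per Intel's
     * APERF/MPERF recommendation. read_msr() is a hypothetical driver wrapper. */
    #include <windows.h>

    #define IA32_MPERF 0xE7   /* ticks at the guaranteed (non-turbo) frequency while in C0 */
    #define IA32_APERF 0xE8   /* ticks at the frequency the core actually ran at while in C0 */

    int read_msr(unsigned int reg, unsigned long long *value);   /* 0 on success */

    double effective_multiplier(double base_multiplier)
    {
        unsigned long long m1, a1, m2, a2;

        read_msr(IA32_MPERF, &m1);
        read_msr(IA32_APERF, &a1);
        Sleep(250);                                   /* sample interval */
        read_msr(IA32_MPERF, &m2);
        read_msr(IA32_APERF, &a2);

        /* APERF/MPERF > 1 means turbo, < 1 means EIST/C1E clocked the core down.
         * With EIST disabled on a Core 2, this stays pinned, e.g. at 11.5. */
        return base_multiplier * (double)(a2 - a1) / (double)(m2 - m1);
    }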
    ------------------------------------------------------------------------------------------------
    Thanks Super Sarge for giving RealTemp 3.60 a fresh try.
    mlee49 says thanks.
  10. MoonPig

    MoonPig

    Joined:
    Aug 7, 2008
    Messages:
    5,710 (2.74/day)
    Thanks Received:
    886
    Location:
    Wakefield, UK
    Used to love RealTemp on my Q9550. It was the ONLY temp monitor I used.

    Pity my 1055T isn't supported :(

    Keep up the good work matey :)
  11. mlee49

    mlee49

    Joined:
    Dec 27, 2007
    Messages:
    8,462 (3.67/day)
    Thanks Received:
    2,096
    Ha, seems like you're the only one who is right these days.
  12. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    Maybe. I don't have access to any Sandy Bridge hardware and everyone I've approached is afraid to share any information with me for fear that it will break their NDA agreement with Intel and then they might get in trouble.

    I've made a few minor adjustments so it can extract the new 4 digit model numbers for a version 3.62. If anyone has some SB hardware and wants to do some testing, send me a PM. I can keep a secret. :)
    modder says thanks.
  13. rickss69

    Joined:
    Aug 23, 2009
    Messages:
    2,431 (1.43/day)
    Thanks Received:
    604
    Location:
    Rockvale TN (Not Australia)
    Is there any way to get RealTemp to report correctly for an Atom?

    I would be happy to donate if I knew where to go...Lord knows I have used it enough. :laugh:

    Attached Files:

    • atom.jpg (195.1 KB)
    Last edited: Nov 27, 2010
  14. rickss69

    Joined:
    Aug 23, 2009
    Messages:
    2,431 (1.43/day)
    Thanks Received:
    604
    Location:
    Rockvale TN (Not Australia)
    Now it's working! You did something, didn't you? ;) Now where is that donate button... :toast:

    This is going to be running 24/7 for my MagicJack and I just wanted to know how the temps would be. Thanks!

    Last edited: Nov 27, 2010
  15. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    I'm as surprised as you are to see it working on your Atom CPU. :)

    According to Intel, RealTemp is using the correct TJMax value for your CPU too.

    http://ark.intel.com/Product.aspx?id=43098&code=Intel® Atom™ Processor D510 (1M Cache, 1.66 GHz)

    The sensors that Intel uses have never been very accurate at reporting low temperatures. It looks like even with the right TJMax value, your sensors are out to lunch unless you live on the North Pole. That's not unusual.
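
    As background on what TJMax is actually used for: the digital sensor reports a distance below TJMax rather than an absolute temperature, so the number you see is TJMax minus the readout from IA32_THERM_STATUS. The snippet below is just a generic illustration of that documented register layout, not RealTemp's code; read_msr() is again a hypothetical kernel-driver wrapper.

    /* Core temperature from the DTS: TJMax minus the "distance to TJMax" readout. */
    #define IA32_THERM_STATUS 0x19C

    int read_msr(unsigned int reg, unsigned long long *value);   /* 0 on success */

    /* Returns degrees C, or -1 if the sensor reading is not valid. */
    int core_temperature(int tjmax)
    {
        unsigned long long status;

        if (read_msr(IA32_THERM_STATUS, &status) != 0)
            return -1;
        if (!((status >> 31) & 1))                        /* bit 31: Reading Valid     */
            return -1;

        int below_tjmax = (int)((status >> 16) & 0x7F);   /* bits 22:16: digital readout */
        return tjmax - below_tjmax;                       /* e.g. 100 - 60 = 40C        */
    }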

    If the load meter is not working correctly, try clicking on the TM Load box in the Settings window. Most of the Atom CPUs are missing some internal timers that RealTemp depends on so the TM Load option should give you a Load value similar to what the Task Manager will show you.
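
    If you are curious what a Task Manager style load number looks like in code, a simple version can be built from GetSystemTimes() deltas. This is only an illustration of the general idea, not what RealTemp does internally.

    /* Task-Manager-style CPU load from GetSystemTimes() deltas.
     * Illustration only; call it once per polling interval. */
    #include <windows.h>

    static unsigned long long ft_to_u64(FILETIME ft)
    {
        return ((unsigned long long)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    }

    double tm_style_load(void)
    {
        static unsigned long long prev_idle, prev_kernel, prev_user;
        FILETIME idle_ft, kernel_ft, user_ft;

        GetSystemTimes(&idle_ft, &kernel_ft, &user_ft);
        unsigned long long idle   = ft_to_u64(idle_ft);
        unsigned long long kernel = ft_to_u64(kernel_ft);   /* kernel time includes idle */
        unsigned long long user   = ft_to_u64(user_ft);

        unsigned long long total = (kernel - prev_kernel) + (user - prev_user);
        unsigned long long busy  = total - (idle - prev_idle);

        prev_idle = idle;  prev_kernel = kernel;  prev_user = user;
        return total ? 100.0 * (double)busy / (double)total : 0.0;
    }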

    Turn on a system tray temperature icon, right click on it and then select the About... option. Hiding in there should be a Donate button. Thanks for supporting free software.
    rickss69 says thanks.
  16. rickss69

    Joined:
    Aug 23, 2009
    Messages:
    2,431 (1.43/day)
    Thanks Received:
    604
    Location:
    Rockvale TN (Not Australia)
    I tried those workarounds but no go. The BIOS reports it at 27°C... that sounds about right. Guess we will just see how long it holds up. ;)
  17. id0l New Member

    Joined:
    Dec 27, 2010
    Messages:
    7 (0.01/day)
    Thanks Received:
    1
    I like the new version (it fixed the reset button crash bug for me) and am glad to see ATI GPU monitoring support, but... it seems there is an issue with it causing the video card to cycle between 2D and 3D modes constantly. See attached screenshots from GPU-Z.

    I had to switch back to 3.40 to avoid this happening. If I hadn't flashed my 4890's BIOS to maintain 1125 MHz memory speed regardless of 'power mode', my screen probably would have been a flicker-fest. :eek:

    This side effect is present regardless of turning off GPU monitoring, increasing/decreasing the poll rate, or disabling ATI support.

    Also, it makes my video card generally idle hotter.

    I really like RealTemp and have used it for a long time, but I registered here because this bug really annoys the hell out of me. :)

    Really liking the custom font support. :p

    Attached Files:

    Last edited: Dec 27, 2010
    Mussels says thanks.
  18. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    The problem you are having doesn't make much sense. If you disable ATI GPU monitoring in RealTemp, it can't be causing your GPU to cycle between 2D and 3D. The ATI code in RealTemp will be completely bypassed and not used at all if you disable ATI monitoring in the Settings window. If RealTemp is not accessing your GPU then I don't understand how RealTemp can be causing the problem you are having.

    Did you change your 2D and 3D settings with a BIOS editor? If you used something like RBE, then maybe upload your BIOS somewhere so I can have a look at what settings you are using.

    How about when you are not running GPU-Z or RealTemp and are just monitoring with the Catalyst Control Center? Is it steady in Windows in 2D? Try to think of a few more tests to try to isolate this problem. I have a 5770 card for testing, and in Windows with RealTemp running, the GPU is steady in 2D mode. It might occasionally go into 3D mode when needed if I start a desktop game, but when not needed, it drops back to 2D and stays there. Let me know if you can figure this out.
  19. id0l New Member

    Joined:
    Dec 27, 2010
    Messages:
    7 (0.01/day)
    Thanks Received:
    1
    Here's the link for my 4890 .ROM file.

    I use RBE v1.25 to make clock speed adjustments in the BIOS - all voltages are stock. All I have changed is the [overclocked] memory speed to remain constant at 1125 MHz across all power states (raising/lowering it on the fly makes the screen flicker, common on these cards) and increased the 3D GPU clock speed to 975 MHz.

    Using CCC shows the GPU clock sitting at 240 MHz. But... I don't really trust any readings from CCC :rolleyes: and thus I don't use it. Even so, my GPU temperature idles 3-4°C higher using RealTemp v3.60 - shouldn't that be an indicator that something is wrong? I have verified that GPU monitoring is disabled by unchecking the box next to 'ATI' in the RealTemp settings page. My card idles at a higher temperature under RealTemp v3.60 regardless of whether GPU-Z is open or not.

    I can open v3.40 and watch the GPU clock speed in GPU-Z stay at 240 MHz all day (and temps are lower at idle)... but when I open v3.60 it all starts going crazy. :confused:
  20. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    If something is causing your GPU to rapidly cycle between 2D and 3D, that would explain why it is idling 3°C or 4°C higher. What I can't understand is that if you disable GPU monitoring in RealTemp, the ATI code that RealTemp uses is completely bypassed and does not run at all. After ATI is unchecked, RealTemp does not interact with your GPU in any way.

    I'm not trying to disagree with you or make RealTemp out to be innocent. I'm just trying the best I can to troubleshoot this problem you are having.

    Are your GPU overclock settings stable while running Furmark?

    It's bedtime here. I'll have a look at your rom file settings tomorrow in RBE. This is the first report of a problem like this so I'm very interested in trying to figure it out.
  21. id0l New Member

    Joined:
    Dec 27, 2010
    Messages:
    7 (0.01/day)
    Thanks Received:
    1
    Hey, don't get me wrong Unc, I'm just as confused as you are. :) I wasn't trying to point fingers. I would assume that RealTemp wouldn't touch my GPU if I disabled monitoring, but from what I can tell it's still doing something odd with it. I mentioned that it happened regardless of GPU-Z being open because I thought it may have been some kind of 'conflict' between the 2 programs causing the issue but I don't think that's the case.

    FurMark is 100% stable after a 2-hour run with the max GPU temp coming in at 64°C (I have aftermarket VGA cooling).

    Take a look at my BIOS file when you get the chance and let me know if you see anything weird. The whole reason I had to use RBE was to lock the memory at a constant frequency. Adobe Flash Player, when it was updated a while back, enabled video hardware acceleration, and every time a Flash movie would play (e.g. YouTube) my screen would flicker at the beginning. I tracked this down to the memory speed jumping from low power settings to high power settings. Apparently that causes screen flickering with the 4890s (the GPU clock increasing/decreasing does not cause screen flickering; only memory). Locking the memory speed to 1125 MHz across the different power states solved this issue (and I don't have to use CCC anymore for overclocking, HOORAY!). :cool:

    Seriously though, I have been using RealTemp since I got my E6600 (perhaps before). That was probably 4-5 years ago. If I didn't love this little app I wouldn't use it religiously. :) Heck, I think I finally figured out a decent way to 'read' my CPU cores and tune their TJmax and Idle Calibration points to where the reported temps are somewhat accurate (at least, they seem that way - they all read very close under full load).
    Last edited: Dec 27, 2010
  22. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    I found the Radeon BIOS that your BIOS is based on and I didn't see anything too unusual. The only thing I noticed was that the Tmin hysteresis is 4 in the original BIOS and you have set it to 0. You also checked 'PWM ramp on' while this is unchecked in the original. Switching to the lookup table to get your fan to run constantly at 100% looks like it should work. All of your bumped-up clock settings seem fine, so nothing is jumping out at me.

    When testing, try opening up the Task Manager and see if there is anything Adobe-related running in the background. Adobe has a habit of sliding things into your startup sequence that run in the background without most users knowing about it. I use Autoruns to pick through all the places that startup items can be hidden in a Windows PC.

    http://technet.microsoft.com/en-us/sysinternals/bb963902

    Try using CCC to monitor. Does it show the GPU switching back and forth from 2D to 3D with the clocks jumping up and down? Does running or exiting RealTemp change what CCC reports? Can you kill any Adobe-related tasks in the Task Manager and see if that changes anything?

    After a good sleep I will have a thorough look at my code to see if I can see anything unusual and try to think of anything else that you can test to find out what's going on.
  23. id0l New Member

    Joined:
    Dec 27, 2010
    Messages:
    7 (0.01/day)
    Thanks Received:
    1
    I did put the lookup table at 100% across the board, but that's pretty much a moot point because my video card fans run at 100% anyway (the TRad2 heatsink has 2x 92mm fans attached to it, both connected to motherboard fan headers). I probably did edit the Tmin hysteresis, as it seems vaguely familiar, but I believe all of those settings have to do with fan speed if I'm not mistaken.

    Do you have a stock RV790 BIOS I can look at? Really though, I'd just like to have one as a backup as mine "somehow" got deleted. I know, I'm terrible. :p I deleted it by accident.

    There is nothing Adobe-related running in the background on my system, as I make sure to disable any extraneous programs through either msconfig or services.msc... I also have Autoruns. :)

    Like I said, CCC reports the 2D clock just sitting at 240 MHz (idle) like it should. But considering the higher idle temps and what GPU-Z is reporting (which I trust more), I still think something is off.

    Definitely let me know what you find!
    Last edited: Dec 27, 2010
  24. unclewebb

    unclewebb RealTemp Author

    Joined:
    Jun 1, 2008
    Messages:
    926 (0.43/day)
    Thanks Received:
    400
    TechPowerUp is a godsend when you forget to save a BIOS before tearing into it with RBE. :)

    http://www.techpowerup.com/vgabios/

    RealTemp gets its GPU temperature data and clock data directly from CCC, so if CCC is screwed up then RealTemp would be screwed up too. The default sensor reading interval for CCC is usually 5 seconds. If you turn off GPU-Z and CCC, use just RealTemp for GPU monitoring, and set the interval to 1 second, does it show this fluctuating GPU core speed?
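
    As a rough guess at what that polling involves under the hood: AMD exposes the GPU temperature and the current clocks through its ADL (Overdrive5) interface, which is what monitoring tools generally query. The sketch below is only that guess, not RealTemp's actual code; it assumes the ADL SDK header and an already-initialized ADL session, and in a real program the ADL_* entry points are resolved from atiadlxx.dll with GetProcAddress.

    /* Polling an ATI GPU through ADL's Overdrive5 interface (sketch).
     * Assumes adl_sdk.h from AMD's ADL SDK and that ADL_Main_Control_Create()
     * has already been called. Error handling omitted for brevity. */
    #include <stdio.h>
    #include "adl_sdk.h"

    void poll_gpu(int adapter_index)
    {
        ADLTemperature temp     = { sizeof(ADLTemperature) };
        ADLPMActivity  activity = { sizeof(ADLPMActivity) };

        ADL_Overdrive5_Temperature_Get(adapter_index, 0, &temp);       /* thermal controller 0 */
        ADL_Overdrive5_CurrentActivity_Get(adapter_index, &activity);

        /* iTemperature is in millidegrees C; clocks are reported in 10 kHz units */
        printf("GPU %d C, core %d MHz, mem %d MHz, load %d%%\n",
               temp.iTemperature / 1000,
               activity.iEngineClock / 100,
               activity.iMemoryClock / 100,
               activity.iActivityPercent);
    }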

    I tend to trust GPU-Z more than CCC as well. Are you using the 10.12 driver?

    The other test would be: if RealTemp and GPU-Z are both running and GPU-Z is reporting this varying GPU speed, does GPU-Z immediately flatline in 2D mode at 240 MHz after exiting RealTemp? I see another sleepless night tonight. :)
  25. id0l New Member

    Joined:
    Dec 27, 2010
    Messages:
    7 (0.01/day)
    Thanks Received:
    1
    No! It doesn't... it seems to be solid at 240 MHz! However, as soon as I start up GPU-Z the funkiness returns. So I closed RealTemp and tried opening just CCC and GPU-Z, thinking from what you said that it may have had something to do with CCC itself, and guess what I find: running CCC + GPU-Z without RealTemp shows the same sporadic 'jumping' in the GPU clock (see attached pic). :confused: Now I'm starting to think that this issue lies more within GPU-Z, or perhaps CCC itself, or the way GPU-Z is polling CCC/the video card (yeah, I don't really know how it works :)).

    I am using 10.7 currently. Hmmm... perhaps I will update tomorrow... I didn't know there was a new driver out. Though I am curious, there are now several different versions of CCC:

    1. AMD Catalyst 10.12 Preview for Windows 7 - Featuring the new Catalyst Control Center (110 MB)
    2. Catalyst Software Suite (64-bit), English Only (72.6 MB)
    3. AMD Catalyst™ Accelerated Parallel Processing (APP) Technology Edition (88.9 MB)

    No clue as to what #1 and #3 contain or if I should get one of those versions. I usually grab #2.

    Yes! And therein lies the rub. :p

    So, in summary, I now realize:
    RealTemp v3.60 running by itself = no problem. GPU-Z running by itself = no problem. Running CCC by itself = why would I do that (lol)? Running GPU-Z with either RealTemp or CCC = GPU clock goes crazy. :)

    Thanks for the link to the BIOS files. I see two for the ATI Radeon 4890 1GB but don't know which one is correct (perhaps they are the same?).

    Attached Files:

    • ccc.gif (15.4 KB)
    Last edited: Dec 27, 2010
