Hi all. Sadly maths is not my strongest point, so this may seem like a very basic question, but hey ho... I have a range of values from 0-1023; how can I scale them down to 0-127? It's on a PIC board, from a pot. I've sorted my ADC out; it combines the high and low byte, and that's the value range it gives me. If possible it would be nice to know the equation of how you did it, rather than just "times by X" etc. Then I can use it to control other parameters also. Many thanks guys n gals

Found this on the interwebz:

Let's say you want to scale a range [min, max] to [a, b]. You're looking for a (continuous) function that satisfies f(min) = a and f(max) = b. In your case, a would be 0 and b would be 127, but let's start with something simpler and try to map [min, max] into the range [0, 1]. Putting min into a function and getting out 0 could be accomplished with

f(x) = x - min  ===>  f(min) = min - min = 0

So that's almost what we want. But putting in max would give us max - min when we actually want 1, so we'll have to scale it:

f(x) = (x - min) / (max - min)  ===>  f(min) = 0;  f(max) = (max - min) / (max - min) = 1

which is what we want. So we need to do a translation and a scaling. Now if instead we want arbitrary values of a and b, we need something a little more complicated:

f(x) = (b - a)(x - min) / (max - min) + a

You can verify that putting in min for x now gives a, and putting in max gives b. You might also notice that (b - a) / (max - min) is a scaling factor between the size of the new range and the size of the original range. So really we are first translating x by -min, scaling it by the correct factor, and then translating it back up to the new minimum value of a.

... or given the numbers (scale) in the OP, you could just divide by 8 and toss the remainder (truncate, don't round): 1024/128 = 8 = 2^3, so dropping the three low bits maps 0-1023 onto 0-127. This is the same as doing 3 right shifts in a digital circuit, or when working with binary (integer) data: Y = X >> 3

When working with integer data, don't most of the (good) compilers do this anyway, even if you code using arithmetic operators instead of byte/word-level shift operations?