CSXRT4 wrote:
I think the 32-bit may be a lot different than the 16-bit stuff. Or maybe I just didn't understand what you were describing in your thread. I'm working with the 16-bit stuff right now, and it seems a bit less documented.
I should clarify - yesterday I integrated a bunch of 16-bit information, courtesy of elevenpoint7five. There's a lot more 16-bit stuff in there right now than there was 24 hours ago.
Quote:
It's probably a good thing nobody responded earlier to this thread; I learned a lot of things in the past couple of days just by digging through everything. I found that when a table lookup occurs, index Y is loaded with the address that is 1 byte above the actual table. Then a subroutine (a different one for 2D or 3D) is called and the result is stored. It looks like there are two bytes above a table that are important: the first byte is the column (x-axis) count minus 1. The second seems to be a table type identifier?
Yes, and today there's some information in the getting-started thread about what those type bytes mean.

Quote:
So 0A - 0C (10 - 12 = -2)
condition requires borrow
10 + 256 = 266
266 - 12 = 254 (0xFE)
The two's complement representation of -2 is 0xFE
But I can't 100% grasp why the borrow is 256 and not 255. I have theories, but I would like a definite answer. Or maybe I'm looking at this completely wrong?
Imagine using decimal notation and being limited to one digit (and no sign), so you can only represent 0-9. If you wanted to calculate 8 - 9, you'd borrow 10, yielding an intermediate calculation of 18 - 9, which gives 9. That looks weird, but consider that if you decrement from 0 in this system, you also end up with 9. So 9 is how you'd represent -1 in this system. It's just like using 0xFF for -1 (and 0xFE for -2) in 8-bit binary.
Borrowing 10 and borrowing 256 are similar in that each value is 1 greater than the largest value you can represent using the number of digits (or bits) in your inputs and outputs. In other words, it's what you get when you add a digit (or bit).
If I remember correctly from classes long ago, you can build CPUs that use 255 for the borrow; that's ones' complement. It's even less intuitive to work with, though.
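
If it helps to see it run, here's a small C sketch (plain standard C, nothing ECU-specific) that reproduces your 0A - 0C example. The unsigned subtraction wraps modulo 256, and reinterpreting the byte as signed gives -2:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 0x0A - 0x0C in 8-bit arithmetic: the result wraps
           modulo 256, so you get 10 + 256 - 12 = 254 = 0xFE. */
        uint8_t a = 0x0A, b = 0x0C;
        uint8_t diff = (uint8_t)(a - b);
        printf("0x%02X - 0x%02X = 0x%02X (%u unsigned)\n", a, b, diff, diff);

        /* Reinterpreting the same byte as signed shows the two's
           complement reading: 0xFE is -2. (Strictly, this conversion is
           implementation-defined in C, but every two's complement
           machine does the obvious thing.) */
        printf("as signed: %d\n", (int8_t)diff);
        return 0;
    }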
Quote:
And how does the system recognize that this is now a negative number?
It doesn't. This is a common source of bugs.

If you're a software developer writing code for a 16-bit processor, you have to be mindful of the fact that signed values can only range between -32,768 and +32,767. For example: 20,000 + 20,000 = -25,536 = surprising behavior from your code. But as long as your values stay within the allowed range, and you check for underflows and overflows, everything works fine.
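
Here's a quick C demonstration of that surprise (hypothetical values, just standard C on a typical 32-bit host):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int16_t a = 20000, b = 20000;
        /* The true sum, 40000, doesn't fit in 16 signed bits.
           Squeezing it back into an int16_t wraps it into range:
           40000 - 65536 = -25536. (The wrap-on-conversion is
           implementation-defined in C, but it's what two's complement
           hardware does.) */
        int16_t sum = (int16_t)(a + b);
        printf("20000 + 20000 = %d\n", sum);
        return 0;
    }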
Quote:
I also think I have come to a point where I need to start learning about the different data types and how they are handled. I understand what signed or unsigned means, but I have no idea how they are identified or used. I've also heard the term "float," which I need to learn about. Any good resources/reading material on this subject?
Think of "signed" and "unsigned" as rules for doing arithmetic. The software developer chooses one or the other depending on the tradeoffs. If negative numbers are expected, signed values are used, and you get the -32,769 and +32,767 range I mentioned earlier. If negative numbers are NOT expected, unsigned values are used, and you a usable range of 0-65,536.
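
Same idea in C, if you want to see both rules applied to one bit pattern (the 0xFFFE value here is just an arbitrary example):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint16_t bits = 0xFFFE;  /* one raw 16-bit pattern */

        /* Same bits, two rule sets: */
        printf("unsigned: %u\n", (unsigned)bits);  /* 65534 */
        printf("signed:   %d\n", (int16_t)bits);   /* -2    */
        return 0;
    }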
"Float" is shorthand for "floating point" and it refers to the fact that the decimal point can float around in the number. I never realized how silly that sounds until just now. But it's true:
http://en.wikipedia.org/wiki/Floating_point
And of course, in binary it's not called a decimal point...
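
If you have a C compiler handy, the standard library's frexpf will show you the "floating" part directly: it splits a float into a mantissa and a power-of-two exponent, value = mantissa * 2^exp. A small sketch (the 6.25 is an arbitrary example value):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        float value = 6.25f;
        int exp;
        /* frexpf returns a mantissa in [0.5, 1) and stores the
           exponent; moving the exponent moves the point. */
        float mantissa = frexpf(value, &exp);
        printf("%g = %g * 2^%d\n", value, mantissa, exp);  /* 6.25 = 0.78125 * 2^3 */
        return 0;
    }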
Quote:
Also, what's up with the bit shift left/right? It seems like it would completely corrupt the data that you are handling. For instance, this is in the speed density ROM; it seems like it would completely render the data in ACC E and ACC D useless. It seems like it's somehow transferring bits from E to D via the carry bit?
Bit-shifting often does carry the topmost bit like that (I think it's universal, but it could vary by CPU). Shifting left is equivalent to multiplying by two, and on CPUs without fast math, it's common to fabricate multiplication operations from shifts and adds. For example, shifting left twice and adding the original number is equivalent to multiplying by five.
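
Here's a rough C sketch of both ideas. The first part emulates what those chained shift instructions are doing: shift the low byte, catch the bit that falls off in a "carry," and feed it into the high byte. The variable names and values are just illustrative, not the actual ROM code. The second part is the multiply-by-five trick:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 16-bit left shift built from two 8-bit halves, the way a
           shift-low / rotate-high instruction pair does it. */
        uint8_t lo = 0xC3, hi = 0x01;          /* 16-bit value 0x01C3 */
        uint8_t carry = (lo & 0x80) ? 1 : 0;   /* top bit of the low byte */
        lo = (uint8_t)(lo << 1);               /* shift low, top bit falls off */
        hi = (uint8_t)((hi << 1) | carry);     /* carry enters the high byte */
        printf("0x01C3 << 1 = 0x%02X%02X\n", hi, lo);  /* 0x0386, i.e. 451 * 2 */

        /* Multiply by 5 using shifts and adds: (x << 2) + x == 5 * x */
        uint16_t x = 7;
        uint16_t x5 = (uint16_t)((x << 2) + x);
        printf("5 * %u = %u\n", x, x5);  /* 35 */
        return 0;
    }

So no data is corrupted: the bit that leaves one byte through the carry is exactly the bit that arrives in the next byte, and the pair of bytes together just got multiplied by two.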