Guess what, your college professors lied to you. 32-bit instructions are completely worthless. They are purely a marketing gimmick. I understand that programmers use 32-bit instructions all the time; my point is that they don't need to.
Think about this:
8-bit gives you numbers 0-255
16-bit gives you numbers 0-65535
32-bit gives you numbers 0-4294967295
8-bit is plenty for simple game logic, such as decrementing lives and going into game over mode when the number of lives hits zero, or counting how many coins Mario has, since Mario never has more than 255 lives or 255 coins at once. Calculating Mario's actual gameplay physics, though, requires numbers larger than 255.
16-bit is enough for level coordinates, because one screen is 256 pixels wide and most games have levels that span 16-32 screen lengths — that's only 4096-8192 pixels, so 0-65535 is more than enough.
NES programmers were smart. Instead of calculating game physics entirely with 16-bit values, they computed the x and y velocity in 8-bit, and added the 8-bit velocity to the 16-bit world coordinate. Like this:
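A sketch of the trick in Python (the real thing would be a few lines of 6502 assembly using ADC with carry; the byte widths are simulated here with masks and the function name is made up):

```python
def add_velocity(pos16, vel8):
    """Add a signed 8-bit velocity to a 16-bit world coordinate.

    On the 6502 this is: sign-extend the velocity (add $00 or $FF to the
    high byte depending on the sign bit), then a 16-bit add via ADC.
    """
    # Sign-extend the 8-bit velocity to 16 bits.
    if vel8 & 0x80:
        vel16 = vel8 | 0xFF00   # negative: high byte becomes $FF
    else:
        vel16 = vel8            # positive: high byte stays $00
    # 16-bit add, wrapping at 16 bits like the hardware would.
    return (pos16 + vel16) & 0xFFFF

# Moving right by 16 subpixels:
print(hex(add_velocity(0x0100, 0x10)))   # 0x110
# Moving left by 16 subpixels (0xF0 is -16 as a signed byte):
print(hex(add_velocity(0x0100, 0xF0)))   # 0xf0
```

The low byte acts as a subpixel, so slow movement accumulates smoothly without ever needing math wider than the coordinate itself.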
The SNES, with its 16-bit instruction set, doesn't have to do any of this crap.
But here is the issue. The people who programmed the SNES, unlike the NES, were stupid. They did EVERY LITTLE THING with 32-bit values. So instead of the example above, SNES programmers did this:
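Roughly this, sketched in Python (function name made up): a full 32-bit add, which the 65816's 16-bit ALU has to split into two 16-bit adds chained through the carry flag.

```python
def add32_on_16bit_alu(a, b):
    """32-bit add the way a 16-bit CPU like the 65816 must do it:
    add the low 16-bit words, then add the high words plus the carry."""
    lo = (a & 0xFFFF) + (b & 0xFFFF)
    carry = lo >> 16                      # carry out of the low word
    hi = ((a >> 16) + (b >> 16) + carry) & 0xFFFF
    return (hi << 16) | (lo & 0xFFFF)

print(hex(add32_on_16bit_alu(0x0001FFFF, 0x00000001)))   # 0x20000
```

Every coordinate update pays for two adds, double-width loads and stores, and double the RAM — for bits that a platformer's physics never uses.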
Then there were programmers who were even stupider, who not only used 32-bit math, but left the 65816 in 8-bit mode!!!
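In 8-bit mode the same 32-bit add gets even worse: four byte-wide adds chained through the carry, one per byte. A Python sketch of what that costs (function name made up):

```python
def add32_in_8bit_mode(a, b):
    """32-bit add done one byte at a time, as on a 65816 left in its
    8-bit accumulator mode: four ADC instructions instead of two."""
    result = 0
    carry = 0
    for i in range(4):                    # low byte first, as the assembly would
        s = ((a >> 8 * i) & 0xFF) + ((b >> 8 * i) & 0xFF) + carry
        carry = s >> 8                    # carry into the next byte
        result |= (s & 0xFF) << (8 * i)
    return result

print(hex(add32_in_8bit_mode(0x12345678, 0x11111111)))   # 0x23456789
```

That's the worst of both worlds: 32-bit data with an 8-bit ALU, on a console whose whole selling point was its 16-bit mode.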