Simple CPU (simplecpu.com)
617 points by michael_fine on Oct 19, 2014 | 62 comments



Here Feynman explains the basic logic behind how all computers are built. He is inspiring even if you already know the details of the topics he covers, because he pushes you toward a fresh view. The YouTube title is, IMHO, misleading: the lecture is about basic hardware concepts, not heuristics, so I think it fits perfectly with the OP.

http://www.youtube.com/watch?v=EKWGGDXe5MA

Edit: The original title is visible at the start: it's "Richard Feynman: The Computers From the Inside Out", and the lecture is from 1985.

The major quote:

"Computers are not primarily computing machines, they are actually filing systems."


Thanks for that. Man, Robin Williams would've played the heck out of a Feynman movie.


Saying that is torturous


Listening to Feynman talking about computers on a sunday evening. Life is good.


Listening to the Q&A, I am surprised by how little has changed, despite how much computers have changed in the last 30 years:

https://www.youtube.com/watch?v=EKWGGDXe5MA&feature=youtu.be...


Thank you


This is awesome.

"You can scroll through memory as well" --> Would be nice to have a scroll-bar because I missed this at first and wondered why there were only 6 bytes of memory. (Would also be nice to see a bit more of the memory at once, although then the instructions might be easy to miss. Maybe a "show more memory" check-box?)

Would also definitely be nice to reset / modify the instruction pointer.

And of course the ability to save/load the CPU state (eg. simple copy/paste to the clipboard.)

Next steps would be to teach about basic I/O, since that's a fundamental part of any computer. (Start with some simulated LEDs/toggle switches that map directly to a special address space?)
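
Something like this, perhaps - a rough Python sketch of the memory-mapped I/O idea (the LED address and the write hook are made up, not part of the actual Simple CPU):

  # Hypothetical sketch only: address 255 is "wired" to a bank of 8 LEDs.
  LED_ADDR = 255           # assumed special address for the LED bank
  memory = [0] * 256       # plain RAM everywhere else

  def write(addr, value):
      memory[addr] = value & 0xFF
      if addr == LED_ADDR:
          # A store to the special address immediately updates the "hardware".
          leds = format(value & 0xFF, "08b").replace("1", "*").replace("0", ".")
          print("LEDs:", leds)

  write(LED_ADDR, 0b10100001)   # LEDs: *.*....*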


A button to execute a single instruction would also be nice.


It's very similar to Charles Petzold's "CODE" book. An excellent read.

http://www.charlespetzold.com/code/


That book made it all snap together for me. It really is fantastic. After reading it, I felt I knew enough to build hardware from first principles, even if I'd be slow and terrible at it.


I found it easier to understand how the D latch worked from this than from Petzold's book, probably because I could see the changes happen, versus working it out and having to convince myself I was reading it all right.


Reminds me of this course: http://nand2tetris.org/

Easily accessible, fun, and if you find it too simple, you can always introduce others to it.


I also really enjoyed the course. The book behind this course is called "The Elements of Computing Systems". I've been recommending it to lots of people.


Very nice.

One request: don't introduce the concept of 'nibble' on the first page. It's jargon that no one needs in this general introduction. And it just sounds silly.


I don't know. I heard about nibbles right at the beginning and then promptly forgot them. I even remember thinking that the word was silly. But it set me up for a very nice epiphany the first time I saw bytes written as hex and played a part in my instantly grasping that.


I'd be curious to poll people's experiences, but I can say that I had never heard the term until today, and I've known this stuff for a number of years.

And it sounds silly and adds nothing. That too.

(edit: my education on this stuff was in the 00's. Sounds like it was more common in the 80s. I'd definitely support obsoleting the term entirely now, though! It adds unnecessary complexity.)


I suppose that nybbles are not encountered much now because most systems are byte-oriented, but back when 4- and 8-bit CPUs were far more common relative to bigger systems, the term appeared quite often in documentation - especially when talking about BCD formats.
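
For example, packed BCD stores one decimal digit per nibble - a quick Python illustration:

  # Packed BCD: the number 42 stored as two nibbles in a single byte.
  value = (4 << 4) | 2          # high nibble = 4, low nibble = 2
  print(hex(value))             # 0x42 - the hex digits mirror the decimal digits

  tens = (value >> 4) & 0x0F    # unpack the high nibble
  ones = value & 0x0F           # unpack the low nibble
  print(tens * 10 + ones)       # 42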


It's yet another term that you can get by without knowing, and you'll probably be fine. But understanding why it's relevant to even have a word for it is the important insight. We should just as soon obsolete the term flip-flop.


> We should just as soon obsolete the term flip-flop

And replace it with what? Bistable multivibrator?


My algorithms teacher (3rd year paper) mentioned them when doing a few lectures on information theory, data compression and encoding, then set an assignment on implementing LZW or LZ78 with a bit-packer (and the inverse). The spec allowed you to either compress via bytes or nibbles.

That was last year. I also got the impression that he liked to fill in likely gaps with small detours when he could.


While I haven't heard or thought about the term in years, I do remember it from learning about computers in the 80's. I guess it made more sense back in the days when binary math and bit twiddling were more common techniques and keeping close track of the number of bits you were using was important.


I first read nybble/nibble in a book in the late 80s when I was a kid.

I don't think I read or heard about it again for over a decade. It's never sounded silly to me, or at least no more silly than "byte", and it's at least slightly useful to explain converting hex.
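
For example (Python):

  byte = 0b11110000                    # 0xF0
  high = (byte >> 4) & 0x0F            # 0b1111 -> the hex digit F
  low  = byte & 0x0F                   # 0b0000 -> the hex digit 0
  print("{:X}{:X}".format(high, low))  # F0 - one hex digit per nibble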


0b11110000 == 0xF0, so two hexadecimal digits represent a byte. Likewise, one hex digit represents a nibble. I'm much more in favor of deprecating WORD as a synonym for byte.


> I'm much more in favor of deprecating WORD as a synonym for byte

It's not a synonym for byte. In modern systems, a word is typically 4 or 8 bytes.


Anyone who cares about this topic is welcome to contribute to https://en.wikibooks.org/wiki/Programmable_Logic/Verilog_for...

I started it at the request of some HN users, and I hope it continues to enlighten software folks who wonder how the CPU they rely on actually works.


I'll rant for a moment here, because I think this is something that the Simple CPU link has the potential to get right, and a common pitfall with digital design tutorials.

I think one of the pain points that you commonly see with tutorials is the concept that the hard part is the language. As a digital designer, I think that's explicitly not the case: just like teaching functional programming, the syntax is not the challenge, but the mode of thinking. Digital design is a skill that really requires deep thought about what a given piece of syntax will compile down to; it's possible to skim over the "building blocks" of software (for instance, it is possible to be a very good JavaScript programmer without knowing what the JIT will transform your code to), but in order to build effective (or even possible-to-synthesize!) hardware designs, I believe that it is very important to start with the fundamentals of how hardware actually works.

So, I plead: if you have cause to write a digital design book, don't write it as a "Verilog for Computer Scientists"! Verilog is a big language, and not all of it is good for writing hardware. What you really want is "Digital Design for Computer Scientists".


Yes, the two key hurdles to HDL are (in my opinion):

- Being able to think parallel. Software is sequential by default, HDL is parallel by default.

- Understanding what you want to build. If you can't draw what you want, you probably can't write good HDL. There's a reason it's called Hardware Description Language, and not "JS2GATES". In a sense, HDL is really just text-based schematics.

Case in point:

http://www.allaboutcircuits.com/vol_5/chpt_7/8.html

When you think "HDL" you probably think of Verilog or VHDL, but a SPICE netlist is also an HDL.


>Being able to think parallel. Software is sequential by default, HDL is parallel by default.

Totally. When working through the nand2tetris stuff, several of my designs fell on their face because I started thinking of multistep circuit logic as sequential, forgetting that junk results aren't discarded by the "flow" but that their values are always "evaluated" and fed into downstream logic.
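
A toy way to picture it (Python standing in for the circuit, not real HDL): every path computes on every tick, and the mux at the end only chooses which result is forwarded:

  # Both "branches" of this combinational block always evaluate; the select
  # line only picks which result reaches the output - unlike an if/else,
  # nothing is skipped.
  def alu(a, b, select):
      sum_out  = (a + b) & 0xFF     # always computed
      diff_out = (a - b) & 0xFF     # always computed, even when "unused"
      return sum_out if select == 0 else diff_out   # the output mux

  print(alu(7, 3, 0))   # 10 - the subtract path still "ran"
  print(alu(7, 3, 1))   # 4  - the add path still "ran"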


I'm not sure how and where or indeed if to contribute, but as a programmer who learned digital electronics (stack of free 74xx chips) before diving into programmable logic, one thing that really helped make these things click for me was thinking about hardware as transistors and learning the very basics of electricity and related physics. This helped me see digital electronics not as a sequential process but as a consequence of natural laws, like something dropping due to gravity.

Thinking of a transistor as a voltage-controlled switch (obviously simplified to the point of being wrong, but it works for understanding digital logic) and seeing how you can use those to build logic gates, and then use gates to build muxes and flip-flops and so on up, was essential to my understanding of HDLs.

(Incidentally, all that has also made how computers and machine code work much clearer to me).

I don't think I could've really done it without that foundation. And I certainly couldn't have done it by just thinking of Verilog with my programmer's idea of concurrency.

It's obviously very low level and you'd have to gloss over a bunch of things, but I'd definitely start with transistors if I had to explain programmable logic to somebody.
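
For what it's worth, here's roughly how that bottom-up picture looks if you fake it in Python (grossly simplified - a transistor treated as nothing but a voltage-controlled switch, and everything built up from one NAND gate):

  # In a CMOS NAND, the output is pulled low only when both series pull-down
  # "switches" conduct, i.e. both inputs are 1.
  def nand(a, b):
      pull_down_conducts = (a == 1) and (b == 1)
      return 0 if pull_down_conducts else 1

  def inv(a):      return nand(a, a)
  def and_(a, b):  return inv(nand(a, b))
  def or_(a, b):   return nand(inv(a), inv(b))

  # One level up, an SR latch is just two cross-coupled NANDs (active-low S/R);
  # each call below is one gate-delay step of the feedback loop.
  def sr_step(s_n, r_n, q, q_n):
      return nand(s_n, q_n), nand(r_n, q)

  q, q_n = 0, 1
  q, q_n = sr_step(0, 1, q, q_n)   # assert /S (set)...
  q, q_n = sr_step(0, 1, q, q_n)   # ...and let the feedback settle
  q, q_n = sr_step(1, 1, q, q_n)   # release: the latch remembers q = 1
  print(q, q_n)                    # 1 0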


I had a similar experience. Digital logic is very clean and neat, but the part where things broke down for me was feedback loops, like latches. It's a bit tricky to tell what's going to happen in such devices without some analog background.


For me, I think visualising the timing really helps - and remembering that signals and gates do not act "instantaneously"; there is always a propagation delay. This is also why shift registers work. Things like edge-triggered flip-flops would be impossible if gates and signals were infinitely fast.


If you want a comprehensive read on this topic, check out the book Code by Charles Petzold: http://www.amazon.com/Code-Language-Computer-Hardware-Softwa...


I thought this was pretty clever:

  00000001 <-- this line is both an instruction and data.
  00000011
  00000001 <-- this line is both data and an address.
  00000011
  00000111
  00000000
  11111101
A program that doesn't work if you move its location in memory is pretty hilarious.


Overlapping code/data is a trick used in practice in highly size-constrained systems, like older game consoles (e.g. http://andrewkelley.me/post/jamulator.html#dirty-assembly-tr... , discussed here at https://news.ycombinator.com/item?id=8405214 ) and in the tiniest categories of the demoscene.


You may be interested in the story of Mel: http://www.pbm.com/~lindahl/mel.html


That was extremely common in the 8-bit days. But then again, it was commonplace to destroy userspace upon loading a program, and you finished your session by either resetting the computer or turning it off (1–2 s boot times helped, though).


Just goes to show how good my reading comprehension is. I completely failed to realize that the subtract operation subtracts operand B from operand A, not the other way around.


I was attempting to create the simplest infinite loop that I could, and found that the instructions do not work the way I thought they would. Here are a couple of programs I wrote. Could someone explain why they work the way they do?

  00000011
  00000100
  00000000
  00000001

Looking at this, I expected that it would read line one, then move the value addressed by line two to the value addressed by line three, i.e. set the instruction pointer to 00000001. This didn't work. What did work was:

  00000000
  00000000
  00000000
  00000011
  00000111
  00000000
  00000001

This program advances to line 4 then remains there. Could someone explain why program two is an infinite loop, but program one isn't? Thanks!


Your first program does as you expect, but then the instruction pointer gets incremented by 3, so it goes on.

If you modify the last line to "11111101" then when the instruction pointer is increased by 3 it will overflow back to 00000001 and you will get your infinite loop.
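
If it helps, here's the cycle in rough Python - guessing at the semantics from this thread (opcode 00000011 copies memory[src] into memory[dst], address 0 holds the instruction pointer, and 3 is added to it after every instruction; the real simulator may differ):

  MOVE = 0b00000011   # assumed: copy memory[src] into memory[dst]

  def step(memory):
      ip = memory[0]                       # address 0 assumed to hold the IP
      opcode, src, dst = memory[ip], memory[ip + 1], memory[ip + 2]
      if opcode == MOVE:
          memory[dst] = memory[src]        # this may overwrite the IP itself...
      # (other opcodes are ignored here - treated as no-ops for this sketch)
      memory[0] = (memory[0] + 3) % 256    # ...but 3 is still added afterwards

  # Your second program, loaded starting at address 1:
  memory = [1, 0, 0, 0, 0b00000011, 0b00000111, 0, 1] + [0] * 248
  for _ in range(5):
      step(memory)
      print("IP =", memory[0])             # 4, 4, 4, ... - the infinite loop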


The CPU runs the opcode (instruction), then adds 3 to the IP. Therefore, your first program sets IP to 1, then it gets 3 added to it, setting IP to 4. Your second program is exactly the same, but setting IP to 4 means an infinite loop.


Hi, I made this site.

The problem is that it moves that value into the instruction pointer, then the instruction pointer increments itself by 3.

The instruction pointer always increments itself by 3 after every instruction.


Thanks all, for the explanations :)


As nothing is moving visually, it seems to be doing nothing:

  0001: 0000
  0010: 0000
  0011: 0000
  0100: 0000
  0101: 0000
  0110: 0000
  0111: 0011
  1000: 1010
  1001: 0000
  1010: 0001


In the first year of CS they taught us to design and understand the simple machine, a simple computer with just a CPU and a control unit. They also taught us how the MIPS architecture worked (quite simplified). I still remember the struggle to rotate a matrix in MIPS.


I'm in my 4th year of University and this just explained the concepts in the best way I've ever seen, far better than any of my lecturers. Thank you.


The memory scrolling doesn't work in FF - you can only see the initial 6 locations... And he's fixed that. Awesome!

Also - it would be very neat to have a "reset" on the Instruction Pointer, so that you don't have to refresh the current page if you want to start your program over...


This is fixed in the sense that I can now scroll through the memory, but the page scrolls at the same time, which makes it a little awkward.


The gates with the simple toggle functions are really good and simple to use. It reminds me of playing around with redstone in Minecraft.

Knowing most if not all of this already, I cannot say how well it teaches, but it does cover what I spent much longer learning.


I really like this presentation and content. It's accurate, interactive, and brief. Feynman, I think, would be pleased. What these few pages describe are precisely the "atoms" that underlie computation, which is truly counter-intuitive, remarkable, and (these days) ubiquitous. (Which describes a lot of things! Black body radiation is my current favorite.)

Two suggestions to the author: first, identify yourself on the site! Second, interpret "0" as "STOP" or "RESET" to prevent the simulation from running to infinity. Alternatively, adjust the UI to indicate that the PC is off-screen.


I wish they would give us such a summary at college before talking about all that confusing stuff. Excellent work!


In the third semester at university in Germany there was a course, "Introduction to Technical Computer Science", where one would learn about things like Karnaugh diagrams, compression algorithms, D latches and so on, and build a 16-bit, optionally pipelined RISC CPU in lab assignments. The professor teaching the course had developed custom signal processors for the LHC, so that was pretty cool as well.


They did for us; right in the first two semesters we had 4–6 hours a week of just low-level stuff. They also introduced us to algorithms and data structures with Pascal, not Java (which I'm very thankful for).

It was quite enlightening to see over the course of three weeks or so how a (simple) CPU could be built from ever-larger building blocks, but starting at the very beginning with transistors.


This is my first real introduction to uber-low level computer science. I managed the following super simple program:

  1.  00000000
  2.  00000001
  3.  00000000
  4.  00000001
  5.  00000010
  6.  00000011
  7.  00000011
  8.  00001010
  9.  00000000
  10. 00000001

The CPU jumps to the instruction at #4, which tells it to add the value of #2 to #3. The instructions at #7-#10 reset the instruction pointer to 00000001, so that it'll increment to #4 again. The result is that #3 is continuously incremented by one.

I've usually preferred to stick to really high level stuff (Like JS/Node.js), but trying this out makes me want to go learn assembly or something!


This makes it into a bit of a game:

https://microcorruption.com/


I feel like there's a few tutorials needed in between writing your first opcodes and passing the first level of microcorruption ;)


Nah, the first few levels of microcorruption are actually really simple.


which tutorials?


I tried to sign up, but I got a "Sorry, but this form is no longer accepting submissions." :(


The form comes up now (I had followed it after you replied and got the same result; I guess somebody fixed something).


This is really cool.

But (is there always a but?) it's very unapproachable. It has no welcome mat.

"Let's dive in" is normally an appreciated attitude, but here it makes it sound like we are coming in in the middle, and have missed the introduction. So, what is this thing? The page doesn't really say. There's an about page, but most people are too lazy to click. They want to see the value explained immediately, clearly, succinctly, on the landing page... or they're gone.

If they do click the About page, the writing is stilted, formal. But they don't, in most cases, I suspect.

When there are clickable images, it's not clear they are clickable. The text indicates you should click, but there are these huge arrows right there, and the obvious interpretation of the text is that you're supposed to click the arrows to see the image do its animation. Nope.

CPU is very jargonish as well. A better title would be something like "how computers work."

Still, nice design, beautifully made, good chunking of stuff into bite-sized pieces.


Just as an aside, the D latch you're showing is the more traditional NMOS circuit rather than CMOS (see the diagram at the bottom of page 510).

http://books.google.co.uk/books?id=5zUABAAAQBAJ&lpg=PA511&ot...


Very well done. I'll be passing it on as a great introduction to logic and binary operations.



