mirror of
https://github.com/soconnor0919/eceg431.git
synced 2025-12-11 06:34:43 -05:00
Remove reflections
@@ -1 +0,0 @@
This project was somewhat easy, but it took a significant amount of time to bring back the concepts. It connected strongly to my discrete math and computer systems (CS) courses, applying those ideas to build the fundamental logic gates and extend them into (de)multiplexers and more complex gates. Setting up the environment was simple for me as well- I'm very comfortable in the terminal, and have already configured editors for any file type I'm likely to encounter.
@@ -1,5 +0,0 @@
Project 10 was a nice shift from the low-level system building I'd been doing- finally working with language structure and grammar. The modular design philosophy I'd been using since Project 6 carried over well. The JackTokenizer and CompilationEngine split followed the same Parser/CodeWriter pattern from my VM translator, just with a much richer set of tokens and grammar rules. Building the tokenizer was actually straightforward- it's essentially just string parsing, which I've done plenty of times before. Comment handling was trickier than expected, though: multi-line comments require state tracking between advance() calls.

The compilation engine was where my programming language design and computer systems (CSCI 306) courses finally clicked into place. Recursive descent parsing is just grammar rules implemented as methods that call each other- elegant, but only once I saw it. Each production rule maps directly to a method, and the recursive calls naturally build the parse tree (which I just so happen to be doing in CSCI 308, programming language design!). The XML output requirement was actually great for debugging, since I could visually inspect the parse tree structure in a browser and catch parsing errors immediately. I hit some tricky edge cases with expression parsing- operator precedence, unary operators, and making sure the tokenizer advanced at exactly the right moments for complex constructs like array access and method calls.

What really struck me was how this project revealed the hidden complexity of syntax analysis- something I'd always taken for granted as a programmer. Seeing how a parser actually breaks down source code according to grammar rules, handles precedence, and builds a structured representation gave me new appreciation for what happens before compilation even starts. Again, a great complement to 308, as I'm learning the theory there and putting it into practice here.
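To make the comment-state idea concrete, here's a minimal sketch (in Python, with illustrative names- not my actual JackTokenizer) of stripping comments while carrying block-comment state across lines:

```python
# Minimal comment-stripping sketch: in_block persists across lines, which
# is the state tracking that made multi-line /* ... */ comments tricky.
def strip_comments(lines):
    out = []
    in_block = False
    for line in lines:
        result = []
        i = 0
        while i < len(line):
            if in_block:
                end = line.find("*/", i)
                if end == -1:
                    i = len(line)          # comment continues onto the next line
                else:
                    in_block = False
                    i = end + 2
            elif line.startswith("//", i):
                break                      # rest of line is a comment
            elif line.startswith("/*", i):
                in_block = True
                i += 2
            else:
                result.append(line[i])
                i += 1
        out.append("".join(result))
    return out
```

Comment markers inside string constants would need extra handling; this sketch ignores that case.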
@@ -1 +0,0 @@
Now we're starting to ramp up, and this is beginning to cause some internal conflict. Coming from computer science, and with my experience on low-power systems, I find optimization really important. Yet in the ALU, I found it easiest to compute multiple values and then decide which one to keep- you can't "build" a circuit after something is computed; you have to prepare for all outcomes. Although the logic here seems Turing complete (I haven't done a full analysis), it's a bit of a learning curve for me to think physically rather than in terms of software.
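A rough Python sketch of that "compute everything, then select" pattern (the operations here are illustrative, not the actual Hack ALU control bits):

```python
# Hardware-style thinking: every candidate result is "wired up"
# unconditionally, and the control signal just selects which one passes
# through- like a mux. Nothing is computed on demand.
def alu_sketch(x, y, op):
    results = {
        "add": x + y,
        "and": x & y,
        "neg_x": -x,
        "zero": 0,
    }
    return results[op]   # the selection step; everything above always "ran"
```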
@@ -1 +0,0 @@
This project felt more aligned with the material from CSCI 306: using program counters, storing information in registers, and using low/high bits of the address to find what to reference. Learning what the DFF does took a bit of reading and re-reading- I'm still fuzzy on that, but the videos helped. The program counter was a bit confusing, as I tried to build it without a register first; once I realized I needed to store the count, I went back to the register. The RAM was straightforward- just scaled up, splitting the address so the high bits pick which bank to reference and the low bits propagate downward.
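The address split can be sketched like this (Python instead of HDL, and the bit widths are illustrative- e.g. RAM64 built from eight RAM8 banks):

```python
# High bits select the bank; low bits address the cell within the bank.
def split_address(addr, bank_bits, offset_bits):
    offset = addr & ((1 << offset_bits) - 1)   # low bits: cell within bank
    bank = addr >> offset_bits                 # high bits: which bank
    assert bank < (1 << bank_bits)
    return bank, offset
```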
@@ -1 +0,0 @@
This project was definitely the most difficult yet, but it made sense after a while. I started by laying out what's going on functionally, then turned that into pseudocode comments. Most of my time went into translating RISC-V-style pseudocode into working code, and debugging. Assembly makes sense, but only after a while. The documentation is kind of all over the place.
@@ -1,17 +0,0 @@
I say this each time, but this one was definitely the hardest so far. Glad to be done with HDL. The Computer was easiest- just combining three parts. Memory was simple enough once I laid it out, using the textbook to figure out offsets and partitions. I'm not a fan of bit splitting (referencing wires like an array), but it's good enough to work. That's partly why I'm glad to move on.

That being said, the CPU was difficult.

*extra lines for dramatic effect*

Decoding the instruction type? Fine.

Computing the jump type? Fine, eventually.

Updating the program counter? Fine. Easy, even.

Connecting outputs? I didn't know I could just reroute outputs- until I looked at my old code and realized I had doubled it anyway. Bit of an airhead moment from me; I just forced it with an Or. Worked, but at what cost? (I know the cost. Two Ors. That's literally the cost.) ¯\_(ツ)_/¯

The ALU op? Lots of variables. But I made the ALU before- it's just a matter of plugging it in correctly.

The problem was taking the instruction and turning it into something the ALU could handle. Even though I had the instruction breakdown and the connections, I just repeatedly hit my head against a wall. It took a lot of trial and error- I had the majority of the project done in the first two days and spent the rest of the time fixing this. Eventually, I just kind of fixed it by accident? (In the words of Ned Ladd: "There are three steps to solving a problem. Write down what you know. Think really hard. Write down the answer." You're at step 2.)

In actuality, I drew out diagrams, connected the bit segments to where they needed to go, and unraveled the mess of drawings I created, over and over, until I actually figured it out. Then it worked. But that isn't as fun of a response.

Thank you for coming to my TED talk.
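The decoding step I struggled with boils down to slicing fixed bit fields out of the instruction word. A Python sketch of the Hack C-instruction layout (`111accccccdddjjj`)- shifts and masks standing in for HDL bit-slicing:

```python
# Pull the C-instruction fields out of a 16-bit Hack instruction word.
def decode(instr):
    is_c = (instr >> 15) & 1          # 0 = A-instruction, 1 = C-instruction
    comp = (instr >> 6) & 0b1111111   # a + cccccc bits: these feed the ALU
    dest = (instr >> 3) & 0b111       # load bits for A, D, M
    jump = instr & 0b111              # jump condition bits
    return is_c, comp, dest, jump
```

For example, `D=A` assembles to `1110110000010000`, so decoding it yields comp `0110000`, dest `010`, jump `000`.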
@@ -1,8 +0,0 @@
Combining a reflection of Projects 5.5 and 6 here...

I began the bunny/hare project by recalling how to program in Python. I looked back at my CSCI 204 labs to see how I had worked with file I/O, and copied functions from there to start. I prefer to work in heavily object-oriented, strongly typed languages with less room for experimentation, so going back to Python feels very odd. Once I got file I/O and lists working, the program was fairly straightforward.

Moving on to the assembler, I heavily utilized the template. I broke the program up into three classes: a Parser, which reads the assembly commands and breaks them into components; a SymbolTable, which acts as a lookup table for predefined symbols (registers, screen, keyboard, etc.); and a Code class, which translates the mnemonics into their binary representations. Once those helpers were done, I began on the assembler itself. I store all the generated output in a list, which is later written to a file. Not the best implementation- more frequent file writes would be safer- but it works for now, and lets me iterate.

I check for more commands, iterate through them, find their type, then either look each one up or convert it to binary (this is simplified, but gets the gist across). Once all commands are processed, I write the list to the file.

That's it! This took a bit of time, and I optimized my classes a bit out of preference, but it works!
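A condensed sketch of that pipeline- the real Parser/Code/SymbolTable tables are much larger, and these names are illustrative, but this handles just enough of the Hack spec to translate a few instructions:

```python
# Tiny subsets of the full mnemonic tables from the Hack spec.
COMP = {"0": "0101010", "D": "0001100", "A": "0110000"}
DEST = {None: "000", "D": "010", "M": "001"}
JUMP = {None: "000", "JMP": "111"}

def assemble_line(line, symbols):
    line = line.strip()
    if line.startswith("@"):                 # A-instruction: @value or @symbol
        value = symbols.get(line[1:], None)
        if value is None:
            value = int(line[1:])
        return format(value, "016b")
    dest, _, rest = line.rpartition("=")     # C-instruction: dest=comp;jump
    comp, _, jump = rest.partition(";")
    return "111" + COMP[comp] + DEST[dest or None] + JUMP[jump or None]
```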
@@ -1,3 +0,0 @@
Functionally, my code for Project 7 (VM to ASM) is the same as Project 6 (ASM to Hack), at least in the beginning. I wrote the program to be very class-heavy, object-oriented, and well structured, which allowed me to bring over the Parser class. The Parser takes all commands into a list and allows iteration through it, identifying each command's type and separating its arguments. The CodeWriter class then writes the assembly commands as strings: commands are parsed by the Parser and fed into the translator, which writes out the correct sequence of assembly instructions (usually more than one per VM line). The translator creates an instance of the Parser and the CodeWriter and combines their functionality to write the assembly to a file. Finally, the main function handles the aggregation of it all- taking in a file, verifying arguments, parsing directory contents, etc.

My process throughout this was iterative- my base from Project 6 was a good start. Although this didn't necessarily build off of the prior project, the process was similar. I started by porting the Parser, mapping the commands to a lookup list. Then came the CodeWriter, which actually converted the commands to assembly. The CodeWriter took the most time, as I needed to fill in the gaps with pushing and popping from the correct places. The translator just connected the two, so that was easy. Along the way I brought over my main function, changed it to handle directories, and updated its usage.
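As one concrete example of "more than one assembly line per VM line," here is the standard translation of `push constant n`, sketched as a Python helper (the helper name is illustrative, not my actual CodeWriter method):

```python
# VM "push constant n" -> Hack assembly: load the constant into D,
# write it at *SP, then increment SP.
def push_constant(n):
    return [
        f"@{n}",      # D = n
        "D=A",
        "@SP",        # *SP = D
        "A=M",
        "M=D",
        "@SP",        # SP++
        "M=M+1",
    ]
```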
@@ -1,5 +0,0 @@
Building the VM translator for Project 8 felt like a natural evolution from Project 7, much like Project 7 built conceptually on Project 6's foundation. The modular design I started in the assembler, with clean separation between the Parser and CodeWriter classes, paid off. I extended the existing VM translator by just adding new command types to the Parser and corresponding translation methods to the CodeWriter. The OOP structure made this expansion surprisingly straightforward.

What struck me most was how this project revealed the hidden complexity of function calls- something I'd always taken for granted as a programmer. Seeing how the stack-based calling convention actually works under the hood, with its intricate back-and-forth of saving state, repositioning pointers, and managing return addresses, gave me a new appreciation for what compilers do behind the scenes. It reminded me of CSCI 315 (operating system design). The recursive Fibonacci function was cool to see, as it showed the parallel between the LIFO nature of the stack and the LIFO behavior of nested function calls- further grounding what I knew about recursion.

The most challenging piece was getting the function calling protocol exactly right. The sequence of operations for the `call` and `return` commands required careful attention to detail, especially managing the temporary registers and ensuring the stack frame was constructed correctly.
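The frame construction for `call f nArgs` can be sketched by simulating the stack as a Python list, so the layout is visible (the real CodeWriter emits Hack assembly that does the same pointer arithmetic; names here are illustrative):

```python
# "call f nArgs": push the return address and the caller's four segment
# pointers, then reposition ARG and LCL for the callee.
def call(stack, pointers, return_addr, n_args):
    stack.append(return_addr)            # where execution resumes after return
    for seg in ("LCL", "ARG", "THIS", "THAT"):
        stack.append(pointers[seg])      # save the caller's segment pointers
    pointers["ARG"] = len(stack) - 5 - n_args   # ARG = SP - 5 - nArgs
    pointers["LCL"] = len(stack)                # LCL = SP
```

After a call with two arguments already pushed, ARG points back at the first argument and LCL points just past the saved frame- exactly the "intricate back and forth" above.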
@@ -1,3 +0,0 @@
Project 9 was a nice change of pace from the previous projects. Instead of implementing precise specifications like the assembler or VM translator, I got to be creative and build something interactive. The modular design approach I'd been using in Projects 6-8 carried over well- separate classes for Point, Food, Snake, and SnakeGame made the code clean and easy to debug. I hit several tricky issues, though: the snake would leave white pixel artifacts when growing, food would sometimes spawn invisibly at screen edges, and hitting walls crashed the program with "illegal rectangle coordinates" errors. Jack also doesn't have built-in random functions, so I had to find a random number generator online, as I didn't know where to start with making one myself. I found one based on linear congruential generator math that I didn't fully understand, but it worked for spawning food in different locations. The modular design helped isolate these problems to specific classes rather than hunting through monolithic code.

The most rewarding part was seeing the entire stack work together. The Snake game runs on the CPU built in Project 5, uses the VM implemented in Projects 7-8, and gets compiled by the Jack compiler. Every keystroke travels through layers I actually understand now- from hardware input to game logic to screen output. The debugging was different too- instead of failed test cases, I had to diagnose visual glitches and gameplay behavior. Jack's limitations also forced some creative workarounds, like adapting a random number generator and handling the lack of color support by using outlined boxes for food instead of filled squares. It felt more like real software development, with messier problems and less obvious solutions, but it was also more satisfying when everything finally worked smoothly.
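The linear congruential generator math itself is simple, even if I didn't dig into why the constants work: next = (a * state + c) mod m. A Python sketch- these constants are illustrative classic-style values, not necessarily the ones from the Jack version I found:

```python
# Minimal LCG sketch: the whole generator is one multiply-add-mod step.
class LCG:
    def __init__(self, seed, a=75, c=74, m=(1 << 16) + 1):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next(self, lo, hi):
        """Return a pseudo-random integer in [lo, hi)."""
        self.state = (self.a * self.state + self.c) % self.m
        return lo + self.state % (hi - lo)
```

Same seed, same sequence- which is also what made food spawns reproducible while debugging.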
@@ -1,18 +0,0 @@
What points were the most clear to you? (List up to 3)

- The fundamentals of logic gates (Or, And, Not) made sense.
- The explanation of how a NAND gate can build all three fundamentals made sense.

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- The idea of how a multiplexer functions makes sense, but its building blocks were kind of fuzzy (required external research).
- Same with the demultiplexer, but I have a better understanding now.

Grading comment:

Reflect on what you read.

Give me a sense about what is connecting to existing knowledge

-OR-

Your "ah ha!" moments

-OR-

What is hanging off by itself, not connecting.

The concepts of chapter 1 felt relatively simple. I understood logic gates and truth tables because of my prior courses, namely discrete math. The snippets of HDL and the API descriptions also made a lot of sense to me, since I work with more complex API structures and took CSCI 306, which covered assembly. The idea of this course is exciting- I've worked on every individual segment before, but never have I gone from the fundamentals to a complete product. There are usually layers of abstraction in my projects- working with existing frameworks, or stopping at the level where I've shown proficiency for a class assignment.
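The multiplexer building blocks that required external research reduce to one formula: Mux(a, b, sel) = (a And Not(sel)) Or (b And sel). A quick Python sketch over single bits (function names mirror the gates, not any course-provided API):

```python
# The three fundamental gates, over single bits (0 or 1).
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

# Mux: sel == 0 passes a through, sel == 1 passes b through.
def MUX(a, b, sel):
    return OR(AND(a, NOT(sel)), AND(b, sel))
```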
@@ -1,20 +0,0 @@
What points were the most clear to you? (List up to 3)

- Tokenization makes sense- breaking code into symbols, keywords, and identifiers.
- Parsing follows the grammar structure naturally.
- The XML output is just a way to visualize the parse tree structure.

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- Why XML output instead of just building the parse tree directly as objects or some other format?
- The LL grammar thing- it seems like Jack is deliberately kept simple to minimize lookahead (tokenization again! like AI!!!).
- How do we handle operator precedence if Jack doesn't enforce it? Lots of parentheses?

Grading comment:

Reflect on what you read.

This chapter shows how compilers work under the hood- tokenization breaks source code into basic elements, then recursive descent parsing follows grammar rules to build a parse tree. The XML output is just a demonstration tool to show that the parser understands the program's structure. This connects to my discrete math course- grammars finally have a practical application beyond theory (finally! theory into practice! I thought this day would never come). The nearly lookahead-free grammar design makes sense from an implementation perspective, though it feels restrictive compared to real languages. Jack's deliberate simplifications (mandatory keywords, no operator precedence, forced curly braces) are clearly intended to make compiler construction easier to learn, but they result in a clunky syntax that I definitely wouldn't want to use in practice. Still, breaking this complex topic into manageable pieces (tokenizer + parser) makes a ton of sense.
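"Each grammar rule is a method that calls other rules" can be shown in a tiny recursive-descent sketch. This is not Jack- just an illustrative grammar, expr := term (('+'|'-') term)*, with integer terms:

```python
# Minimal recursive-descent parser: expr() and term() mirror the grammar
# rules, and the recursive calls build the parse tree as nested tuples.
class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def advance(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def expr(self):                       # expr := term (('+'|'-') term)*
        node = self.term()
        while self.peek() in ("+", "-"):
            op = self.advance()
            node = (op, node, self.term())
        return node

    def term(self):                       # term := integer constant
        return int(self.advance())

print(Parser(["1", "+", "2", "-", "3"]).expr())   # ('-', ('+', 1, 2), 3)
```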
@@ -1,17 +0,0 @@
What points were the most clear to you? (List up to 3)

- The concept of binary addition, and the subsequent operations
- The differences between a half and full adder, and how to connect them

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- The idea of an ALU made sense, but the implementation was a little dense. I understand it now after working on the project for a bit.

Grading comment:

Reflect on what you read.

As with the prior chapter, this felt like a review of CSCI 306, where we went over integer/binary addition, subtraction, etc. The idea of an ALU was touched on there, but I feel like the courses are going to diverge a bit here- less theory, more implementation in this course. Again, I'm excited to see where this goes.
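The half/full adder relationship can be sketched in a few lines of Python over single bits- the full adder is just two half adders plus an Or on the carries:

```python
# Half adder: sums two bits, no carry-in.
def half_adder(a, b):
    return a ^ b, a & b              # (sum, carry)

# Full adder: two half adders chained, carries combined with OR.
def full_adder(a, b, c):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c)
    return s2, c1 | c2               # (sum, carry-out)
```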
@@ -1,17 +0,0 @@
What points were the most clear to you? (List up to 3)

- The concept of clock cycles- separating computation into time units
- Flip-flops storing data

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- How do we maintain a clock with our current system? Everything computes instantly so far- how do we modify what we have?

Grading comment:

Reflect on what you read.

My first thought- concurrency?? Already? This doesn't sound good. But then I read on, and it started to make sense. Clock cycles were expected, though I thought they'd have come up with the ALU (but the ALU is simple enough without them). I remember building flip-flops in Minecraft (unsurprisingly), so that concept also makes sense. Other than that, the RAM seems to scale exponentially, but from an implementation perspective it's mostly repetition- so it should be simple enough once the basic implementation is in place. We're scaling up rapidly here, yet the concepts still remain somewhat within grasp.
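The flip-flop behavior underlying all of this is compact: the DFF's output at clock tick t is whatever its input was at tick t-1. A Python sketch with explicit ticks (a simulation of the behavior, not how the hardware is built):

```python
# DFF sketch: out(t) = in(t-1). One tick() call = one clock cycle.
class DFF:
    def __init__(self):
        self.state = 0

    def tick(self, d):
        out = self.state      # output is last cycle's latched input...
        self.state = d        # ...and the new input is latched for next cycle
        return out
```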
@@ -1,19 +0,0 @@
What points were the most clear to you? (List up to 3)

- Machine language is familiar; this flavor of it is not.
- Working with memory, the memory vs. data registers, and loop/jump

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- Back to assembly… can we not? (Kidding, but also not really.)
- Input/output- we have predefined registers beyond the RAM range?
- Working with the screen seems intimidating- we'll see how that goes. Any tips would be appreciated.

Grading comment:

Reflect on what you read.

As with Project 3, this assembly-language material is starting to overlap with CSCI 306. Although I didn't necessarily like that part of the course, I understand it, and I'm hoping that carries over now. The only problem I see myself facing in the near future is shedding my RISC-V assembly tendencies and learning Hack, which has odd syntax by comparison. Otherwise, the instructions make sense from a high level and appear to be straightforward.
@@ -1,18 +0,0 @@
What points were the most clear to you? (List up to 3)

- The four main sections of a CPU: ALU, registers, control, I/O
- The screen made sense, as I'd had to look this up before

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- Where are we storing the ROM?
- Is this OS going to have a scheduler?

Grading comment:

Reflect on what you read.

Starting to get a little muddy, but I'm still holding on. I knew the parts of the CPU, and connecting it all makes sense, but I'm too used to Linux and I keep confusing layers of complexity. Abstracting all of the functions into sections of the CPU is what's going to hold it together, I think. Now it's just taking the API spec and turning it into something functional. We'll see how this goes.
@@ -1,19 +0,0 @@
What points were the most clear to you? (List up to 3)

- Lots of hardcoding, but it's okay here (software brain scared)
- Symbolic references seem like a fun challenge. I'm sure I'm the only person to say this.
- This is just translation- instruction X equals value Y, just a bit more complicated. The spec will help.

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- I don't like hardcoding. Magic numbers are bad.
- Are we going to end up optimizing later? Especially since this is a low-power CPU.

Grading comment:

Reflect on what you read.

I feel like I say this every time, but this is where it's getting good. We have the system, we have the language- now let's make the system recognize the language (kind of but not really; you get what I mean). Again, this calls back to CSCI 306, but there we didn't write our own assembler, we just explored how one worked. RISC-V, although simple, isn't as simple as this machine. I'm worried about my resistance to magic numbers and hardcoding, though they'll be necessary here. I'm also not sure where to start. But that's a later problem.
@@ -1,21 +0,0 @@
What points were the most clear to you? (List up to 3)

- The concept of a VM (I work with Java a ton, so this made sense)
- The two-tiered architecture (one translation between code and bytecode, and another between bytecode and machine language)
- Push and pop, the stack, eventually a heap (I guess not?)

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- Why are we using a VM here? Isn't that inefficient? Shouldn't we be emulating something C-like instead of Java-like? Java is sloooow.
- Again, as I read more- why a VM? This is just an extra layer! We're not going for cross-compatibility here; this is a weak machine!

Grading comment:

Reflect on what you read.

I'm definitely (not) biased here. My software engineering brain has one of those rotating police sirens going off. Why a VM? That seems super inefficient here. We have a low-power computer, we're not optimizing for cross-platform code compatibility, and we're not even using a standard architecture! So why are we using this? To me, this looks like a useless intermediary step that exists just to complicate things.

That being said, learning a VM language is a different story, and that's how I'm rationalizing this seemingly bad decision. If this VM exists to teach us about programming language design, fine. But that might be better left for CSCI 308.
@@ -1,19 +0,0 @@
What points were the most clear to you? (List up to 3)

- Okay, so we're going up a step. We made VM code turn into Hack ASM, but we need something to produce the VM-language files in the first place- we're not writing directly to VM.
- So we're translating Jack (Java-ish) to VM! New syntax to learn, but it's only eight library files. That sounds doable.
- The "calling protocol" makes sense- similar to the call stack I'm used to.

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- Again- VM bad, extra obscurity! Good for education, though.
- How do we really define functions? That's the library stuff I don't quite get yet.

Grading comment:

Reflect on what you read.

We're going up in the CS course stack! This time, a mix of 306 and 308- computer systems and programming language design. I understand how we're using functions now: converting a user-friendly programming language to a VM intermediary, then compiling to assembly. This makes sense, and it's exactly what I wanted to see in the course! I wanted this higher-level connection from language to compiler to compiled output- and the VM step, no matter how much I disagree with it, is nice too.
@@ -1,20 +0,0 @@
What points were the most clear to you? (List up to 3)

- Jack syntax is pretty straightforward- Java-ish but simplified, which makes sense
- OOP concepts translate well, even on this basic platform
- The standard library provides the building blocks we need for larger programs

Grading comment:

What points were the muddiest and you'd like to talk more about? (List up to 3)

- Memory management seems manual- are we doing our own malloc/free equivalent?
- Graphics programming at this level feels primitive…
- How do we debug Jack programs effectively without proper debugging tools?

Grading comment:

Reflect on what you read.

Finally! We're at the application layer- this is where it all comes together. After building everything from NAND gates to a VM, we can actually write real programs that do useful things. The Jack language feels like a stripped-down Java, which makes sense given my background, and it's cool to see how OOP concepts work even on this simple platform. I'm now realizing that I've built a complete computing stack. We went from basic logic gates to writing applications with graphics and user interaction- insane when you think about it. Every layer we built is now being used: the ALU for calculations, memory for storage, the screen for output. What still bugs me is the realization that this is still pretty primitive compared to modern development- no garbage collection, limited graphics, basic I/O. But I know that's the point: I understand what's happening under the hood now. When I write programs normally, there are so many layers of abstraction I never think about.