Thursday, April 24, 2025

CST 334 - Week 8

I spent most of my time reviewing the materials to get ready for the final exam. The final covers two big topics: concurrency and persistence. I looked back at my notes, lecture slides, and the readings. For concurrency, I reviewed threads, race conditions, locks, semaphores, and condition variables. I practiced writing and reading code with pthread_create, pthread_join, and mutex locks. I also made sure I understand classic problems like producer-consumer and the reader-writer problem. It was helpful to go back to the examples we discussed in class. For persistence, I reviewed how the operating system interacts with I/O devices like hard drives. I studied how disks work, how files and directories are stored using inodes and blocks, and how the OS reads and writes files. I also practiced calculating disk I/O times based on seek time, rotational delay, and transfer rate.
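
To make sure the disk I/O math stuck, I wrote out one calculation as a tiny C program. The drive numbers here are made up for practice rather than taken from any real disk:

/* A small sketch of the disk I/O time math I practiced, using made-up
 * example numbers (not from any specific drive in the course materials). */
#include <stdio.h>

int main(void) {
    double seek_ms     = 4.0;      /* average seek time */
    double rpm         = 7200.0;   /* rotational speed */
    double transfer_mb = 100.0;    /* transfer rate in MB/s */
    double request_kb  = 4.0;      /* size of one request */

    /* Average rotational delay is half a full rotation. */
    double rotation_ms = 0.5 * (60.0 * 1000.0 / rpm);

    /* Transfer time for the request itself. */
    double transfer_ms = (request_kb / 1024.0) / transfer_mb * 1000.0;

    double total_ms = seek_ms + rotation_ms + transfer_ms;
    printf("T_io = %.2f + %.2f + %.3f = %.2f ms\n",
           seek_ms, rotation_ms, transfer_ms, total_ms);
    return 0;
}

With these numbers the positioning time (seek plus rotation) completely dominates the tiny transfer time, which matches why sequential access is so much better than random access.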

Looking back at the course, I learned so much about how the operating system works behind the scenes. Before this class, I didn’t really think much about how memory is managed or how threads share resources. Now I understand how important it is to use synchronization tools correctly to avoid bugs like race conditions. The memory allocator project and the concurrency lessons were challenging, but they helped me grow a lot. I feel more confident working with low-level system topics now.

At the start of the semester, I was nervous about working in C and understanding systems programming, but now I feel more comfortable reading and writing C code. One thing that changed for me is that I now pay more attention to performance and safety in code, especially with memory and threads. I still wonder how these concepts apply in real large systems, like in cloud computing or distributed systems. There are some topics that I want to dig into more, like how semaphores are used in real systems, how memory allocation works in modern programming languages, and how disk scheduling affects performance in cloud environments. I’ll try to keep learning about these areas through small side projects or by reading more technical blogs and documentation after the class ends.

Tuesday, April 22, 2025

CST334 - Week 7

I learned how operating systems handle persistence, which means keeping data safe even after the computer is turned off. I learned that devices like hard drives and SSDs are used for this, and the OS manages how data is read, written, and stored.

One important topic was how the OS talks to I/O devices. There are two main ways: polling and interrupts. Polling checks the device over and over, which wastes CPU time. Interrupts are better in many cases because they let the CPU work on other things and only respond when the device is ready. I also learned about DMA (Direct Memory Access), which helps transfer data between memory and devices without using too much CPU.
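
To see the difference for myself, I sketched a rough comparison where a second thread stands in for the device. The "device" here is just an atomic flag I made up, not real hardware registers, so it's only a simulation of the idea:

/* Rough sketch contrasting polling with interrupt-style waiting.
 * The "device" is simulated by a second thread setting an atomic flag,
 * which is my own simplification -- real drivers talk to device registers. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int device_ready = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

static void *fake_device(void *arg) {
    (void)arg;
    sleep(1);                        /* pretend the I/O takes a while */
    atomic_store(&device_ready, 1);
    pthread_mutex_lock(&m);
    pthread_cond_signal(&c);         /* "interrupt": wake the waiter */
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t dev;
    pthread_create(&dev, NULL, fake_device, NULL);

    /* Polling version: burns CPU spinning until the flag flips.
     * while (!atomic_load(&device_ready)) { }                     */

    /* Interrupt-like version: sleep on a condition variable instead. */
    pthread_mutex_lock(&m);
    while (!atomic_load(&device_ready))
        pthread_cond_wait(&c, &m);
    pthread_mutex_unlock(&m);

    printf("device finished; the CPU was free to do other work meanwhile\n");
    pthread_join(dev, NULL);
    return 0;
}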

We also studied disk scheduling, which is how the OS decides which data to read or write first. Algorithms like SSTF (Shortest Seek Time First) or SCAN (the elevator algorithm) reduce wait time by moving the disk head more efficiently. I thought it was interesting that the OS doesn't always service requests first-come, first-served when reordering them is faster.
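
Here is a small sketch I wrote of the SSTF idea: among the pending requests, pick the track closest to the current head position. The request queue and head position are invented example numbers:

/* SSTF sketch: from the pending requests, pick the track closest to the
 * current head position. The track numbers below are made up. */
#include <stdio.h>
#include <stdlib.h>

static int pick_sstf(const int *requests, int n, int head) {
    int best = -1, best_dist = 0;
    for (int i = 0; i < n; i++) {
        int dist = abs(requests[i] - head);
        if (best == -1 || dist < best_dist) {
            best = i;
            best_dist = dist;
        }
    }
    return best;                 /* index of the next request to service */
}

int main(void) {
    int requests[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int head = 53;
    int idx = pick_sstf(requests, 8, head);
    printf("head at %d -> service track %d next\n", head, requests[idx]);
    return 0;
}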

Another big topic was RAID, which uses multiple disks together to improve speed or make data safer. For example, RAID-0 stripes data across disks for better performance, while RAID-1 mirrors data across disks for safety.
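
To picture striping, I wrote a few lines showing how RAID-0 could map logical blocks round-robin across an array. The disk count is just an example I picked; RAID-1 would instead keep a full copy of each block on a second disk:

/* Quick sketch of RAID-0 striping: blocks rotate round-robin across the
 * array. The disk count and block numbers are example values. */
#include <stdio.h>

int main(void) {
    int num_disks = 4;
    for (int block = 0; block < 8; block++) {
        int disk   = block % num_disks;   /* which disk holds the block */
        int offset = block / num_disks;   /* block's position on that disk */
        printf("logical block %d -> disk %d, offset %d\n", block, disk, offset);
    }
    return 0;
}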

We also covered how the file system organizes files using inodes, directories, and bitmaps. Files are stored in blocks, and the OS keeps track of them using data structures like superblocks and inode tables. System calls like open(), read(), write(), and fsync() are used to manage files.
I'm still trying to understand more about the file system implementation, inode structure, block pointers, and how multi-level indexing works for large files. I feel I need more time to review these details to fully understand how the file system organizes and accesses data.
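
Even though I'm still reviewing the internals, the system call interface itself makes sense to me, so I wrote a minimal example using those calls. The file name and message are placeholders I picked:

/* Minimal sketch of the file system calls mentioned above.
 * "journal.txt" and the message are just placeholder values. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("journal.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char *msg = "week 7 notes\n";
    if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }

    fsync(fd);        /* force the data out to the disk, not just the cache */
    close(fd);

    /* Read it back. */
    char buf[64];
    fd = open("journal.txt", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n >= 0) { buf[n] = '\0'; printf("%s", buf); }
    close(fd);
    return 0;
}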

Tuesday, April 15, 2025

CST334 - Week 6

Throughout this week, I learned about semaphores, which are a tool to help threads work safely when they share data. Semaphores use a counter and two operations: wait (also called down) and post (also called up). When a thread calls wait, it tries to take a resource, and if the counter is zero, it has to wait. When a thread calls post, it gives back the resource and wakes up any waiting thread. A binary semaphore is like a simple lock, where the counter is either 0 or 1.
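
Here's a small example I wrote using POSIX semaphores, with sem_wait and sem_post standing in for the wait/post operations from lecture, acting like a binary lock around a shared counter. The loop counts are arbitrary, and sem_init is the Linux-style setup:

/* Binary semaphore used like a lock around a shared counter. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t binary;            /* initialized to 1, so it acts like a lock */
static int counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&binary);      /* "down": take the resource or block */
        counter++;              /* critical section */
        sem_post(&binary);      /* "up": release and wake a waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&binary, 0, 1);
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d (expect 200000)\n", counter);
    sem_destroy(&binary);
    return 0;
}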

One example we saw is the producer-consumer problem, where one thread puts data into a buffer and another takes it out. If they don’t use semaphores correctly, they might use the same space at the same time or miss data. So we use semaphores to manage empty spots, full spots, and to make sure only one thread changes the buffer at a time.
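
This is my attempt at the producer-consumer pattern with two counting semaphores plus a mutex. The buffer size and number of items are arbitrary values I chose:

/* Producer-consumer sketch: "empty" counts free slots, "full" counts
 * filled slots, and a mutex protects the buffer indices. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 4
#define ITEMS    10

static int buffer[BUF_SIZE];
static int fill_idx = 0, use_idx = 0;
static sem_t empty, full;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);                   /* wait for a free slot */
        pthread_mutex_lock(&mutex);
        buffer[fill_idx] = i;
        fill_idx = (fill_idx + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                    /* announce a filled slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);                    /* wait for a filled slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[use_idx];
        use_idx = (use_idx + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                   /* free the slot back up */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, BUF_SIZE);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}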

We also learned about reader-writer locks, where many threads can read at the same time, but only one can write. This helps when we want more performance while still avoiding bugs.
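
pthreads has a built-in reader-writer lock, so I tried a short sketch with it. The thread counts are arbitrary, and this only shows the locking calls, not a realistic workload:

/* Reader-writer lock sketch: many readers may hold the lock at once,
 * but a writer needs exclusive access. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value = 0;

static void *reader(void *arg) {
    (void)arg;
    pthread_rwlock_rdlock(&rw);     /* shared: other readers allowed */
    printf("read %d\n", shared_value);
    pthread_rwlock_unlock(&rw);
    return NULL;
}

static void *writer(void *arg) {
    (void)arg;
    pthread_rwlock_wrlock(&rw);     /* exclusive: readers must wait */
    shared_value++;
    pthread_rwlock_unlock(&rw);
    return NULL;
}

int main(void) {
    pthread_t r1, r2, w;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    return 0;
}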

Lastly, I also watched a lecture video about the Anderson-Dahlin method. I still need more time to fully understand it, but I know it talks about avoiding deadlocks by carefully scheduling threads based on their potential lock usage. I'll make sure to review this one more time next week.


Tuesday, April 8, 2025

CST334 - Week 5

We covered topics about concurrency, threads, and how to use locks and condition variables to manage shared resources safely. It was interesting to learn how a single program can actually have multiple threads running at once, each doing a different task. This helps the program run faster and use the CPU more efficiently, especially on multi-core systems.

One of the important topics that I learned is that shared memory between threads can lead to problems like race conditions. For example, when two threads try to change a variable at the same time, the result might be wrong. To solve this, we can use locks to make sure only one thread can access the critical section at a time. I also learned about condition variables, which are useful when one thread needs to wait for another to finish something. These tools help keep threads in order so they don’t mess up shared data.
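
To convince myself about race conditions, I wrote the classic shared-counter example. With the mutex the result is always correct; if the lock calls are commented out, the count usually comes out short:

/* Two threads incrementing a shared counter. The mutex makes the
 * increment a proper critical section. */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);      /* comment these two lines out to */
        counter++;                      /* see the race condition happen  */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (expect 2000000)\n", counter);
    return 0;
}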

I still need more time to go over this week's topics since we had a midterm exam. I spent most of the beginning of this week preparing for the exam and didn't have enough time to study the new material. I'll go over the reading and slides to better understand those topics.

 

Tuesday, April 1, 2025

CST334 - Week 4

We explored memory management in operating systems in more depth this week. One of the main topics was free space management, where we explored how memory is given to programs and how unused space can cause problems. We learned about memory allocation strategies like first fit, best fit, and worst fit, and how they affect fragmentation. We also talked about how memory can be split and merged (called splitting and coalescing) to use space more efficiently.
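
Here's a rough sketch of how I picture first fit with splitting, using a plain array of chunk sizes instead of a real free list (a real allocator also tracks headers and coalesces neighboring free chunks on free):

/* First-fit sketch: walk the "free list", take the first chunk big
 * enough, and split off the leftover. Sizes are arbitrary examples. */
#include <stdio.h>

#define NUM_CHUNKS 4

static int free_sizes[NUM_CHUNKS] = { 10, 30, 20, 50 };   /* free chunk sizes */

static int first_fit(int request) {
    for (int i = 0; i < NUM_CHUNKS; i++) {
        if (free_sizes[i] >= request) {
            free_sizes[i] -= request;    /* split: leftover stays on the list */
            return i;                    /* chunk that satisfied the request */
        }
    }
    return -1;                           /* no chunk big enough */
}

int main(void) {
    int idx = first_fit(25);
    if (idx >= 0)
        printf("request 25 -> chunk %d, leftover %d\n", idx, free_sizes[idx]);
    return 0;
}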

We moved on to paging, which is a way to divide memory into fixed-size pieces to avoid external fragmentation. Paging uses a page table to map virtual addresses to physical addresses. This lets programs use memory even when there’s not enough space in one block. But checking the page table every time can be slow, so systems use a Translation Lookaside Buffer (TLB), which is like a shortcut that stores recent address translations to make memory access faster.
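
I worked through one translation by hand in code, assuming 4 KB pages so the low 12 bits are the offset. The little page table is made up just for the example:

/* One paging translation: split the virtual address into VPN and offset,
 * look up the PFN, and rebuild the physical address. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t page_size = 4096;                 /* 4 KB pages, 12 offset bits */
    uint32_t vaddr     = 0x00003ABC;           /* example virtual address */

    uint32_t vpn    = vaddr / page_size;       /* same as vaddr >> 12 */
    uint32_t offset = vaddr % page_size;       /* same as vaddr & 0xFFF */

    uint32_t page_table[] = { 7, 2, 9, 5 };    /* VPN -> PFN (made up) */
    uint32_t pfn   = page_table[vpn];
    uint32_t paddr = pfn * page_size + offset;

    printf("VPN %u, offset 0x%X -> PFN %u, physical 0x%X\n",
           vpn, offset, pfn, paddr);
    return 0;
}

The TLB's job is to cache the VPN-to-PFN part of this lookup so it doesn't have to hit the page table in memory every time.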

We also learned about more advanced page table methods like multi-level page tables and inverted page tables. This was a bit of a challenging topic for me, but I understand that these help save space because a single linear page table can be too large. We also covered page replacement policies like FIFO, LRU, and optimal replacement. These are strategies for deciding which page to remove from memory when new data needs to be loaded. I found LRU interesting because it tries to keep the most recently used data in memory.
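
To pin down what LRU means, I wrote a toy victim-selection sketch where each frame records when it was last used and the oldest one gets evicted. The timestamps are invented:

/* Toy LRU victim selection: evict the frame with the oldest last-use time. */
#include <stdio.h>

#define NUM_FRAMES 4

int main(void) {
    /* last_used[i] = "time" of the most recent access to frame i */
    int last_used[NUM_FRAMES] = { 12, 3, 9, 7 };

    int victim = 0;
    for (int i = 1; i < NUM_FRAMES; i++)
        if (last_used[i] < last_used[victim])
            victim = i;

    printf("evict frame %d (least recently used at time %d)\n",
           victim, last_used[victim]);
    return 0;
}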

Tuesday, March 25, 2025

CST334 - Week 3

I learned several important concepts in memory management and address translation this week. I explored the idea of address space, which is how the operating system manages memory for each process. Each process gets its own isolated memory area, preventing one process from interfering with another. This abstraction allows the operating system to allocate memory more efficiently, helping programs run smoothly by ensuring each one has its own dedicated memory.

I also studied the C Memory API, which includes functions like malloc, free, and realloc. These functions allow programs to dynamically allocate and manage memory during runtime, especially on the heap. The ability to allocate, use, and release memory as needed is crucial for efficient memory management in programs. Through the programming assignment, I learned about memory allocation and management at a low level, focusing on how memory is divided into chunks and how memory is allocated, freed, and coalesced. I understood how to find the first available chunk of memory in the pool, how to split chunks to fit smaller allocations, and how to combine adjacent free chunks into a larger one. This ties into how memory is managed in C using functions like malloc and free. It was quite a challenging assignment but I enjoyed it.
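
Outside the assignment, I also wrote a small refresher on the C memory API itself, just to go through allocate, grow, and free in one place:

/* malloc / realloc / free refresher. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *nums = malloc(4 * sizeof(int));            /* heap allocation */
    if (nums == NULL) return 1;

    for (int i = 0; i < 4; i++)
        nums[i] = i * i;

    int *bigger = realloc(nums, 8 * sizeof(int));   /* grow the block */
    if (bigger == NULL) { free(nums); return 1; }
    nums = bigger;

    for (int i = 4; i < 8; i++)
        nums[i] = i * i;

    for (int i = 0; i < 8; i++)
        printf("%d ", nums[i]);
    printf("\n");

    free(nums);               /* release it back to the allocator */
    return 0;
}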
 
I also went through address translation, which involves converting logical addresses, used by a program, into physical addresses in memory. This translation is done by the operating system through mechanisms like segmentation, ensuring that the program's address space maps correctly to the physical memory available on the system.
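
The simplest case I could code up is a base-and-bounds style check, which is how I think about one segment being relocated into physical memory. The base, bounds, and address values are made up:

/* Simplified base-and-bounds translation for a single segment. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t base   = 0x4000;   /* where the segment starts in physical memory */
    uint32_t bounds = 0x1000;   /* size of the segment */
    uint32_t vaddr  = 0x0ABC;   /* virtual address within the process */

    if (vaddr >= bounds) {
        printf("segmentation fault: 0x%X is outside the segment\n", vaddr);
        return 1;
    }
    uint32_t paddr = base + vaddr;
    printf("virtual 0x%X -> physical 0x%X\n", vaddr, paddr);
    return 0;
}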

Tuesday, March 18, 2025

CST334 - Week 2

I learned about processes and how the operating system manages them. A process is a running program, and each process has its own memory, registers, and unique process ID (PID). The operating system creates processes using system calls like fork(), which makes a copy of a running process, and exec(), which replaces a process with a new program. Processes go through different states such as new, ready, running, blocked, and terminated depending on what they are doing.
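
Here's a minimal fork/exec/wait example like the ones from the reading; I used ls as the program the child runs, but that choice is arbitrary:

/* fork() copies the process; the child then replaces itself with another
 * program via exec while the parent waits for it to finish. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* copy the current process */
    if (pid < 0) {
        perror("fork");
        return 1;
    } else if (pid == 0) {
        /* Child: replace this process image with the "ls" program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* only reached if exec fails */
        return 1;
    } else {
        /* Parent: wait for the child to finish. */
        wait(NULL);
        printf("child %d finished\n", pid);
    }
    return 0;
}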

I also learned about process scheduling, where the operating system decides which process gets to use the CPU. Scheduling can be preemptive, where a process can be interrupted, or non-preemptive, where a process runs until it finishes or waits for an input. Different scheduling algorithms exist, like First Come, First Served (FCFS), Shortest Job First (SJF), Round Robin (RR), and Multi-Level Feedback Queue (MLFQ). MLFQ is interesting because it adjusts a process's priority based on how it behaves, making it efficient for different types of workloads. After going through these topics, I now have a better understanding of how the operating system handles multiple processes and ensures efficient CPU usage.
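
To see why SJF helps average turnaround time, I coded the usual example of one long job and two short jobs all arriving at time 0 (the job lengths are made-up, textbook-style numbers):

/* Average turnaround time when jobs run to completion in the given order,
 * assuming every job arrives at time 0. */
#include <stdio.h>

static double avg_turnaround(const int *lens, int n) {
    double total = 0.0, finish = 0.0;
    for (int i = 0; i < n; i++) {
        finish += lens[i];     /* job i finishes after everything before it */
        total  += finish;      /* turnaround = finish time minus arrival (0) */
    }
    return total / n;
}

int main(void) {
    int fcfs_order[] = { 100, 10, 10 };   /* long job happens to arrive first */
    int sjf_order[]  = { 10, 10, 100 };   /* same jobs, shortest first */
    printf("FCFS average turnaround: %.1f\n", avg_turnaround(fcfs_order, 3));
    printf("SJF  average turnaround: %.1f\n", avg_turnaround(sjf_order, 3));
    return 0;
}

With these numbers FCFS averages 110 while SJF averages 50, which made the difference between the two policies concrete for me.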
