Fundamental Concepts of Operating Systems

Section 1

This week I learned about the major functions of an operating system, which can be broken down into several core areas that work together to manage system resources and support program execution:

Memory Management – This function determines how much memory to allocate to each process and keeps track of which parts of primary memory are currently in use. It assigns memory when a program requests it and releases it once the process has completed. Throughout the course, I’ve also learned how closely memory management is tied to the storage subsystem. For example, virtual memory often relies on secondary storage as a backing store, and many systems allow files to be mapped directly into a process’s virtual address space. Because of this, memory usage and file system operations must be carefully coordinated to ensure smooth performance and data consistency.
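The idea of mapping a file directly into a process’s virtual address space can be shown with Python’s standard `mmap` module. This is a minimal sketch (the file name and contents are made up for illustration): reads and writes go through memory, and the OS coordinates flushing the changes back to disk.

```python
import mmap
import os
import tempfile

# Create a small file to map (path and contents are illustrative).
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello from disk")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:   # length 0 = map the whole file
        print(mem[:5])        # read through memory, not through read()
        mem[0:5] = b"HELLO"   # write through memory; the OS flushes to disk

with open(path, "rb") as f:
    print(f.read())           # the in-memory write is now visible on disk
```

This is exactly the coordination the paragraph above describes: the same bytes are simultaneously part of the file system and part of the process’s memory image.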

Processor Management – This function allocates CPU time to processes as needed. It creates, schedules, and tracks processes throughout their lifecycle and removes them from the CPU once their work is finished. Processor management ensures that multiple tasks can run efficiently without interfering with each other.
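One classic way processor management shares CPU time is round-robin scheduling. The sketch below is illustrative (the function name and process list are my own, not from any real kernel): each process runs for a fixed time quantum, and unfinished processes rejoin the back of the ready queue.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: remaining CPU time}; returns order of completion."""
    ready = deque(bursts.items())
    finished = []
    while ready:
        pid, remaining = ready.popleft()
        remaining -= quantum                 # run for one quantum (or less)
        if remaining > 0:
            ready.append((pid, remaining))   # preempted: back of the queue
        else:
            finished.append(pid)             # done: removed from the CPU
    return finished

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))   # ['P3', 'P2', 'P1']
```

Short jobs finish early while long jobs keep cycling, which is how multiple tasks share the CPU without any one of them monopolizing it.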

Device Management – This function tracks all I/O devices connected to the system. It decides which devices to assign to processes and releases them when they are no longer needed. Device drivers play a major role here by serving as translators between the hardware and the operating system.

File Management – The OS keeps detailed information about files, such as creation dates, file types, storage locations, access permissions, and user ownership. It supports file creation, deletion, and modification, as well as directory changes and backups stored on secondary devices. File management is also closely linked to memory management because updates to the file system are often buffered in main memory before they are written to disk.

Network Management – This function oversees the flow of data entering and leaving the system. It handles network discovery, communication with external devices, and enforces restrictions such as firewall rules to protect the system from unauthorized access.

All of these functions operate within a layered hierarchy of subsystems. At the lowest level is the hardware: the CPU, RAM, storage drives, network interfaces, and I/O devices responsible for performing the actual computations. Above that is the kernel, the core of the operating system, which manages process scheduling, device control, system calls, and direct access to hardware resources. The next layer consists of software applications that rely on the kernel to perform their operations. Finally, at the highest level is the user interface, which provides the graphical or command-line tools users rely on to interact with the system.





Section 2

This week helped me understand how essential processes and threads are to operating systems. A process is a running program with its own resources, including code, a program counter, stack, data section, and heap. As it runs, it moves through process states (threads have corresponding thread states) such as new, ready, running, waiting, and terminated. The OS keeps track of everything in the Process Control Block (PCB), which stores the key information needed to manage each process.
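A toy model makes the PCB concrete. The fields below are illustrative, chosen to mirror the kinds of per-process bookkeeping described above; a real PCB holds much more.

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):            # the classic five-state process model
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

@dataclass
class PCB:
    """Toy Process Control Block: the OS's bookkeeping for one process."""
    pid: int
    state: State = State.NEW
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

p = PCB(pid=42)
p.state = State.READY     # admitted to the ready queue
p.state = State.RUNNING   # dispatched to the CPU
print(p.pid, p.state.value)
```

State transitions in a real OS are driven by the scheduler and by events (I/O completion, timer interrupts), but the data being tracked looks much like this.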

I also learned why systems choose single-threaded or multithreaded designs. Single-threaded systems are simple and handle one task at a time, making them easier to build and debug. Multithreaded systems, however, can offer far better performance. Threads share memory and data, allowing tasks to run in parallel and improving responsiveness. If one thread blocks, others can continue running. This makes multithreading ideal for servers and entertainment software, while single-threaded designs are best for small devices with one specific function, like a calculator.

Another major concept was race conditions, which happen when threads access shared data at the same time and produce inconsistent results. To prevent this, operating systems follow the rules of the critical section problem: mutual exclusion, progress, and bounded waiting. These ensure that shared resources are used safely and fairly.
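The increment example below is a standard illustration of this (the counter and thread count are arbitrary). The read-modify-write in `counter += 1` is not atomic, so two unsynchronized threads can lose updates; wrapping it in a mutex enforces mutual exclusion over the critical section and makes the result deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 200000 every run; without the lock, updates can be lost
```

Removing the `with lock:` line reintroduces the race condition: the final count may come out below 200000, and differently on each run.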

To handle synchronization, systems use tools like Peterson’s algorithm, mutex locks, synchronization hardware, and semaphores. Each helps coordinate threads and prevent conflicts.
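Semaphores can be sketched with the classic producer/consumer pattern. This is an illustrative example using Python's `threading` module: `empty` counts free buffer slots, `full` counts filled ones, and a mutex guards the shared buffer itself.

```python
import threading

buffer, results = [], []
mutex = threading.Lock()
empty = threading.Semaphore(2)   # buffer capacity of 2 slots
full = threading.Semaphore(0)    # no items available initially

def producer():
    for item in range(5):
        empty.acquire()          # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()           # signal: one more item available

def consumer():
    for _ in range(5):
        full.acquire()           # wait for an item
        with mutex:
            results.append(buffer.pop(0))
        empty.release()          # signal: one more free slot

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(results)   # [0, 1, 2, 3, 4], in order, with no lost or duplicated items
```

The semaphores coordinate the two threads (the consumer blocks when the buffer is empty, the producer when it is full) while the mutex provides the mutual exclusion required by the critical section rules above.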

Overall, this topic showed me how processes, threads, and synchronization form the foundation of operating system behavior and performance.






Section 3

Throughout this week I have gained a deeper understanding of operating systems theory and the fundamental concepts that form the foundation of OS design. A core function of an operating system is memory management, which allocates, organizes, and efficiently uses memory resources. My concept map illustrates the key techniques, processes, and addressing mechanisms involved in memory management.

Contiguous memory allocation assigns each process a single continuous block of memory. It includes fixed partitioning, which divides memory into predefined blocks but can cause internal fragmentation, and dynamic partitioning, which adjusts block sizes to fit processes and uses strategies like First Fit, Best Fit, and Worst Fit. Dynamic partitioning can also experience external fragmentation, which can be reduced through compaction.
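The three placement strategies can be sketched in a few lines. Here `holes` is a list of free-block sizes, and each function returns the index of the hole it would choose for a request, or `None` if nothing fits (the hole sizes are made-up examples):

```python
def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i                          # first hole big enough
    return None

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None     # smallest adequate hole

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None     # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # 1: 500 is the first block that fits
print(best_fit(holes, 212))    # 3: 300 leaves the least leftover space
print(worst_fit(holes, 212))   # 4: 600 leaves the largest usable remainder
```

Each strategy trades off differently against external fragmentation: Best Fit minimizes immediate waste but tends to leave many tiny unusable holes, while Worst Fit keeps the leftovers large enough to reuse.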

Non-contiguous memory allocation allows processes to use multiple separate memory areas, improving efficiency. Techniques like paging and segmentation manage memory more effectively. Paging divides memory into fixed-size frames and splits processes into pages, reducing external fragmentation. Segmentation organizes memory according to a program’s logical structure, such as code, data, and stack segments.
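Paged address translation can be shown with a toy page table (the page size and mappings below are illustrative). A logical address splits into a page number and an offset; the page table maps the page to a physical frame:

```python
PAGE_SIZE = 256

page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]       # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

# Logical address 300 = page 1, offset 44 -> frame 2 -> 2*256 + 44 = 556
print(translate(300))   # 556
```

Because the offset within a page never changes, only the page-to-frame mapping needs to be stored, and the process’s pages can land in any free frames anywhere in physical memory.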

Memory is organized into levels to optimize performance, which is essential for effective memory management. Level 1 consists of registers, the smallest and fastest memory units within the CPU, storing information such as the program counter, immediate CPU instructions, and physical addresses. Level 2 is the cache, which provides extremely fast access to frequently used data for immediate processing. Level 3 is main memory (RAM), which holds active processes and data; it is volatile, so its contents are lost when the system powers off. Level 4 consists of non-volatile storage, such as hard drives or SSDs, which retain data even after shutdown.

This hierarchy also supports virtual memory, which allows processes to use more memory than is physically available. Virtual memory relies on moving data between main memory and non-volatile storage, while the MMU maps logical addresses generated by programs to physical addresses in memory, ensuring correct and efficient access across all levels.

Paging and segmentation both underpin virtual memory, and virtual memory in turn supports swapping: moving inactive portions of a process between main memory and secondary storage to optimize performance.
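When physical frames run out, the OS must decide which page to swap out. A minimal sketch of FIFO page replacement (the reference string below is a standard textbook example, not from this course's materials) counts page faults as pages are brought in and the oldest resident page is evicted:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1                            # page fault: load the page
            if len(frames) == num_frames:
                frames.discard(order.popleft())    # evict the oldest page
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, yet more faults
```

This particular reference string also demonstrates Belady's anomaly: with FIFO, adding a fourth frame actually increases the fault count.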

Finally, addressing ensures correct memory access. A logical address is generated by a program at runtime, while the physical address refers to the actual memory location. The MMU handles address translation, mapping logical to physical addresses securely.

Studying memory management has shown me how operating systems balance efficiency and flexibility, and this week highlighted how memory allocation and organization are handled.




Section 4

One key concept I learned this week is file system management, which allows an operating system to organize, store, and retrieve data efficiently. Since main memory cannot hold all user files, mass storage devices such as hard drives, SSDs, and NVMe drives are used. The OS ensures that files are properly stored with essential attributes like name, identifier, type, location, size, access permissions, timestamps, and user information. It also provides fundamental operations such as create, read, write, delete, open/close, truncate, and reposition, giving users controlled access to data. Files can be accessed sequentially or directly, depending on the needs of the process.
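Most of these fundamental operations map directly onto Python's standard file API. The sketch below walks through create, write, read, reposition, truncate, and delete on a throwaway file (the path and contents are illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.txt")

with open(path, "w") as f:      # create and open for writing
    f.write("hello file systems")

with open(path, "r+") as f:
    data = f.read()             # sequential read
    f.seek(0)                   # reposition back to the start
    f.truncate(5)               # truncate to the first 5 bytes

size = os.stat(path).st_size    # size attribute maintained by the OS
print(data, size)
os.remove(path)                 # delete
```

`os.stat` also exposes the other attributes this section lists, such as timestamps (`st_mtime`), permissions (`st_mode`), and owner information (`st_uid` on Unix-like systems).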

Directory structures illustrate another fundamental OS concept: organizing resources for efficiency and accessibility. I learned that directories maintain relationships between files and allow scalable management. Common structures include:

Single Level: Simple but prone to naming conflicts.

Two Level: Isolates user files for security, but limits sharing.

Tree Structure: Used in most personal computers; scalable and flexible with subdirectories.

Acyclic Graph: Allows shared files among users without duplication, but requires careful consistency management.

General Graph: Supports cycles for flexibility but increases management complexity.
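The tree structure in particular is easy to model: directories are nested containers, files are leaves, and a path resolves by walking from the root one component at a time. The layout below is a made-up example:

```python
root = {
    "home": {
        "alice": {"notes.txt": "file"},
        "bob": {"report.doc": "file"},
    },
    "bin": {"ls": "file"},
}

def resolve(path):
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]        # a KeyError is "no such file or directory"
    return node

print(resolve("/home/alice/notes.txt"))   # 'file'
print(sorted(resolve("/home")))           # ['alice', 'bob']
```

An acyclic-graph structure would differ only in that two directory entries could point at the same underlying node, which is why it needs the careful consistency management mentioned above.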

I also learned how operating systems handle I/O devices, which allow a computer to interact with external hardware. The OS coordinates these devices through multiple layers:

Device Drivers translate OS commands into device specific operations.

Interrupt Handlers manage signals from devices, allowing the system to update data or resume processes.

User Space I/O Software provides an interface for applications to access devices.

Kernel I/O Subsystem manages scheduling, buffering, caching, spooling, device reservation, and error handling.

From this week, I have learned that operating systems are designed to efficiently manage both hardware and software resources, as well as to organize and handle files across different types of directory structures.




Section 5

Throughout week 5, I’ve learned that protection and security are essential components of operating systems because they define how access is controlled and how systems defend themselves from threats. Domain protection is one of the main strategies used to manage and restrict access in a controlled manner. Its purpose is to ensure that users, programs, and processes receive only the minimum level of privilege they need to perform their tasks, following the principle of least privilege. By limiting access rights, the OS reduces the likelihood of intentional abuse, accidental misuse, or damage to system objects. Domains define the set of rights a user or process can exercise, and the primary mechanism used to implement this is the access matrix, which maps domains to objects and specifies the operations each domain is allowed to perform. Strategies like global tables, access lists, capabilities, and lock-and-key mechanisms all offer different ways of organizing and enforcing these access rules.
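An access matrix can be sketched as a mapping from (domain, object) pairs to permitted operations. The domains, objects, and rights below are invented for illustration:

```python
access_matrix = {
    ("user", "file1"): {"read"},
    ("user", "printer"): {"print"},
    ("admin", "file1"): {"read", "write"},
    ("admin", "printer"): {"print", "configure"},
}

def allowed(domain, obj, op):
    """Check one cell of the matrix; absent cells mean no rights at all."""
    return op in access_matrix.get((domain, obj), set())

print(allowed("user", "file1", "read"))    # True
print(allowed("user", "file1", "write"))   # False: least privilege in action
```

The implementation strategies listed above are different ways of storing this same matrix: access lists slice it by column (per object), while capability lists slice it by row (per domain).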

I also learned that protection extends beyond operating system structures and into programming languages through language based protection. This approach uses features of the language and the compiler to prevent vulnerabilities before a program runs. Compiler based enforcement checks type correctness and other rules before compiling code, memory safety prevents programs from accessing memory outside allowed boundaries, and sandboxing lets developers test code in isolated environments before deployment. Together, these methods help ensure that even if access rights are well defined, the code running in the system can't easily violate them.

While domain and language protections focus on defining and structuring access, security mechanisms defend systems against threats that try to bypass those protections. Vulnerabilities can come from program based issues or network based attacks, so the system relies on several layers of security. Cryptography protects data by encrypting it so that unauthorized users cannot read it even if intercepted. User authentication ensures only legitimate users access the system through passwords, biometrics, or multi factor methods. Firewalls filter network traffic and block unauthorized connections, and user training helps reduce risks caused by human error, such as falling for phishing attempts or unsafe computing practices.
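Password-based authentication, for example, should never store plaintext passwords. A common approach, sketched here with the standard library's PBKDF2 (parameter choices are illustrative), stores only a random salt and a derived hash, so a stolen credential database does not reveal the original passwords:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)           # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

salt, digest = hash_password("correct horse")
print(verify("correct horse", salt, digest))   # True
print(verify("wrong guess", salt, digest))     # False
```

The salt defeats precomputed lookup tables, the high iteration count slows brute-force guessing, and the constant-time comparison avoids leaking information through timing.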


