Memory stores the programs and data that the CPU uses for processing. Broadly, a computer has two kinds of memory: main memory and secondary storage. Main memory, such as RAM, is temporary (volatile), while secondary storage, such as the hard disk, is permanent and keeps programs when they are not running.
When we want to execute a program, it must first be brought from secondary storage into main memory. This is where memory management comes in: it is the responsibility of the operating system to provide memory space to every program and to manage which process is resident and executing at any given time.
The operating system, with help from the hardware, translates the logical addresses a program uses into physical addresses in main memory; it must perform this translation before the program can operate on memory. Mapping a logical address onto a physical address in this way is known as address binding.
There is also a concept known as dynamic loading. Here a routine of the program does not reside in the computer's memory from the start; it is loaded for processing only when a request for it is found, and loading a routine on demand in this way is what we call dynamic loading.
Let us discuss some basic concepts which are related to memory management.
We’ll be covering the following topics in this tutorial:
Contiguous Allocation
Contiguous allocation means that main memory is divided into small, equal-size partitions, and different processes use different partitions to run their applications. When a request is found, a process is allocated one of these partitions, and the space given to it is adjacent: every resident process occupies one continuous region of memory, and when a process requests memory, a free partition is allotted to it.
However, a problem arises because all of memory is divided into fixed, contiguous sizes. When a process needs less memory than its partition provides, the leftover space inside the partition is wasted; this problem is known as internal fragmentation. Conversely, a process that requires more memory than one partition offers will not fit into that small area at all.
A second problem also occurs with contiguous allocation: external fragmentation. Here the total free memory spread across several separate areas would be enough for a process, but no single contiguous area is large enough, so the request cannot be satisfied. The following problems therefore arise when we use contiguous memory allocation (a small sketch of fixed partitions and the fragmentation they cause follows the list):
1) Wasted Memory: memory that remains unused because it cannot be given to any process. When processes arrive whose memory requirements cannot be met by the pieces that are available, that unusable memory is called wasted memory.
2) Time Complexity: time is also wasted in allocating and de-allocating memory space to processes.
3) Memory Access: extra operations must be performed every time memory is provided to a process.
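As a rough illustration (the partition size and process sizes below are purely hypothetical), the sketch places processes into fixed, equal-size partitions and reports the internal fragmentation left inside each partition; a process larger than one partition simply cannot be placed.

```c
#include <stdio.h>

#define PARTITION_SIZE 100   /* assumed fixed partition size, in KB */
#define NUM_PARTITIONS 4

int main(void) {
    int process_size[] = {60, 95, 130, 20};   /* hypothetical process sizes, in KB */
    int n = sizeof(process_size) / sizeof(process_size[0]);

    for (int i = 0; i < n && i < NUM_PARTITIONS; i++) {
        if (process_size[i] <= PARTITION_SIZE) {
            /* the unused remainder of the partition is internal fragmentation */
            printf("Process %d (%d KB) -> partition %d, internal fragmentation = %d KB\n",
                   i, process_size[i], i, PARTITION_SIZE - process_size[i]);
        } else {
            /* a process bigger than one fixed partition cannot be loaded at all */
            printf("Process %d (%d KB) does not fit in a %d KB partition\n",
                   i, process_size[i], PARTITION_SIZE);
        }
    }
    return 0;
}
```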
Process Address Space
The process address space is the set of logical addresses that a process can reference. When memory is allocated to a program, it is the job of the operating system to map its logical addresses onto physical addresses.
Addresses are of three types, as follows:
• Symbolic Addresses are the addresses we use in the source code. The essential elements of a symbolic address space are variable names, constants and instruction labels.
• Relative Addresses are assigned at compile time: while compiling the program, the compiler converts symbolic addresses into relative addresses.
• Physical Addresses are generated by the loader when the program is loaded into main memory.
The set of all logical addresses is known as the logical address space, and the set of all physical addresses is known as the physical address space.
The Memory Management Unit (MMU) is a hardware device that performs the run-time conversion of virtual addresses to physical addresses. It makes use of the following mechanism (a small sketch follows the list):
• The value stored in the base register is added to every address generated by the user process.
• The user never deals with real physical addresses; the user program works only with virtual addresses.
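As a minimal sketch of this mechanism (the register value and the sample address are assumed, not taken from any real system), the MMU's translation can be pictured as adding the base, or relocation, register to every address the process generates.

```c
#include <stdio.h>

/* assumed contents of the base (relocation) register */
#define BASE_REGISTER 14000UL

/* model of the MMU's translation: physical address = base + virtual address */
unsigned long translate(unsigned long virtual_address) {
    return BASE_REGISTER + virtual_address;
}

int main(void) {
    unsigned long va = 346;   /* a virtual address generated by the user process */
    printf("virtual %lu -> physical %lu\n", va, translate(va));
    return 0;
}
```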
Static v/s Dynamic Loading
These are the two types of loading, and the decision of which one to use is made while the computer program is being developed. With static loading, the whole program is compiled and linked without leaving any dependency on an external program: the linker combines the object program with the other object modules into a single image that also includes the logical addresses.
With dynamic loading, the compiler compiles the program, and only references are kept to the modules you want to add at run time; the pending or remaining work is done at execution time.
While using static loading, the program and its data are loaded into memory in advance so that execution of the process can start.
While using dynamic loading, the dynamic routines are stored on disk in relocatable form and are loaded into memory only when the program needs them.
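For a concrete picture of dynamic loading, here is a minimal sketch using the POSIX dlopen/dlsym interface to pull in the standard math library only at run time; the library name shown is the usual one on Linux, and the program is typically built with the -ldl flag. Until dlopen is called, nothing from the library occupies the process's memory, which is exactly the point of loading on demand.

```c
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* load the shared library only when it is actually needed */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* look up the routine by its symbolic name at run time */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);   /* the module can be unloaded when no longer needed */
    return 0;
}
```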
Static v/s Dynamic Linking
Linking is also of two types: static and dynamic.
In static linking, the linker combines all the modules needed by the program into a single executable program, so that any dynamic dependency is avoided.
In dynamic linking, there is no need to link all the modules in advance. Instead, at compile and link time only a reference to the dynamic module is recorded, and the real linking happens at run time.
Example: Dynamic Link Libraries (DLLs) on Windows systems.
Swapping
Swapping is a technique in which a process is moved from main memory to secondary storage (swapped out) and later brought back into main memory (swapped in). This is done so that the memory freed by swapping a process out can be used by other processes.
Swapping takes time and therefore affects the performance of the system, but it is beneficial when we want to run multiple processes in parallel. For this reason, swapping is also sometimes described as a technique for memory compaction.
The total time taken by swapping can be calculated by adding:
1. The time taken to move the whole process from main memory to the secondary disk.
2. The time taken to move it back from the secondary disk into main memory.
3. The time taken by the process to regain its place in main memory.
Adding all of the above gives the total time taken by the whole swap.
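As a rough worked example (the figures are assumed, not taken from any particular system): swapping out a 100 MB process over a disk with a transfer rate of 50 MB per second takes 100 / 50 = 2 seconds, and swapping it back in takes another 2 seconds, so the transfer time alone is about 4 seconds, plus whatever time passes before the process regains its place in main memory.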
Memory Allocation
In memory allocation, memory is assigned to the computer programs. Generally, main memory has two partitions: low memory and high memory.
• Low Memory is the memory in which the operating system resides.
• High Memory is the memory in which the user processes reside.
The operating system makes use of two types of mechanism: single-partition allocation and multiple-partition allocation.
1. Single Partition Allocation
In this, we make use of the relocation-register scheme, which is used to protect processes (and the operating system) from each other. The relocation register contains the value of the smallest physical address available to the process, while the limit register contains the range of logical addresses. Each logical address generated by the process must be less than the value held in the limit register; valid addresses are then mapped by adding the value of the relocation register.
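A minimal sketch of this check follows (the register contents and the sample addresses are assumed for illustration): each logical address is first compared against the limit register, and only addresses inside the range are relocated by adding the relocation register, while anything outside the range would make the hardware trap to the operating system.

```c
#include <stdio.h>

/* assumed register contents, loaded by the operating system's dispatcher */
#define RELOCATION_REGISTER 100000UL   /* smallest physical address of the partition */
#define LIMIT_REGISTER       50000UL   /* size of the legal logical address range */

int main(void) {
    unsigned long logical[] = {0, 42500, 60000};   /* hypothetical addresses from a process */
    int n = sizeof(logical) / sizeof(logical[0]);

    for (int i = 0; i < n; i++) {
        if (logical[i] < LIMIT_REGISTER) {
            /* valid address: relocate it into the process's partition */
            printf("logical %lu -> physical %lu\n",
                   logical[i], logical[i] + RELOCATION_REGISTER);
        } else {
            /* out of range: the MMU would trap to the operating system */
            printf("logical %lu -> addressing error (trap)\n", logical[i]);
        }
    }
    return 0;
}
```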
2. Multiple Partition Allocation
In this, main memory is divided into several fixed-size partitions, and each partition holds only one process. Whenever a partition sits idle, the operating system selects a process from the input queue and loads it into that idle partition. When the process has completed its execution, the partition becomes idle again and can be used for another process.
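The sketch below (the partition size, the number of partitions and the process sizes are all assumed) models the idea: whenever a partition is free, the next process from the input queue that fits is loaded into it, and a process that cannot be placed keeps waiting in the queue.

```c
#include <stdio.h>

#define NUM_PARTITIONS 3
#define PARTITION_SIZE 200   /* assumed fixed partition size, in KB */

int main(void) {
    int partition_free[NUM_PARTITIONS] = {1, 1, 1};   /* all partitions start idle */
    int input_queue[] = {120, 300, 60, 180};          /* hypothetical process sizes, in KB */
    int n = sizeof(input_queue) / sizeof(input_queue[0]);

    for (int p = 0; p < n; p++) {
        int placed = 0;
        for (int i = 0; i < NUM_PARTITIONS && !placed; i++) {
            if (partition_free[i] && input_queue[p] <= PARTITION_SIZE) {
                partition_free[i] = 0;   /* the partition now holds exactly one process */
                printf("Process of %d KB loaded into partition %d\n", input_queue[p], i);
                placed = 1;
            }
        }
        if (!placed)
            printf("Process of %d KB must keep waiting in the input queue\n", input_queue[p]);
    }
    return 0;
}
```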
Fragmentation
As processes are loaded into memory and later removed from it, the freed memory space becomes fragmented into small pieces. Because these blocks are small, they cannot be allocated to other processes whose space requirements they do not meet, even though a lot of memory may be free in total. This problem is called fragmentation.
Fragmentation can be of two types: internal and external.
• Internal Fragmentation occurs when a memory block bigger than what the process requires is allocated to it. The portion left over inside the block cannot be used by any other process, as it does not fulfil that process's space requirement.
It can be reduced by assigning the process the smallest block of memory that still fulfils its space requirement.
• External Fragmentation occurs when the total free memory space would satisfy the requirement of a process, but that memory is not contiguous, so the process cannot use it.
It can be reduced with the help of the compaction technique: the memory contents are shuffled so that all the free memory space merges into one large block, which a process can then use.
So, the compaction technique reduces the wastage of memory created by fragmentation.
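A rough sketch of compaction follows (the memory layout is hypothetical, with 0 marking a free word and any other value marking a word owned by some process): allocated contents are slid toward the low end of memory so that the scattered free holes merge into one contiguous free block at the high end.

```c
#include <stdio.h>

#define MEM_SIZE 10

int main(void) {
    /* hypothetical layout: 0 = free word, non-zero = owned by the process with that id */
    int memory[MEM_SIZE] = {1, 0, 2, 2, 0, 0, 3, 3, 0, 0};

    /* compaction: shuffle allocated contents toward the start of memory */
    int next = 0;
    for (int i = 0; i < MEM_SIZE; i++) {
        if (memory[i] != 0)
            memory[next++] = memory[i];   /* move the allocated word down */
    }
    for (int i = next; i < MEM_SIZE; i++)
        memory[i] = 0;                    /* everything above 'next' is now one free hole */

    printf("After compaction: ");
    for (int i = 0; i < MEM_SIZE; i++)
        printf("%d ", memory[i]);
    printf("\n(%d contiguous free words at the high end)\n", MEM_SIZE - next);
    return 0;
}
```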