00266: How much memory does the Unix Data Server use?


The Data Server actually takes up very little memory. It is also difficult to calculate exactly how much is needed, since the answer depends on how the operating system manages process memory. However, here are a few guidelines:

1) The memory required for code space is only needed once, for the parent process. When the initial Data Server process spawns a child process for each connection, the children share the parent's memory for the code space. Therefore, the memory required for loading the program is counted only once.

2) Each child process will require some memory set aside for every file it opens. However, this memory is very small: just the file name and a few other pieces of bookkeeping information.

3) Memory requirements are also very small for reading data, as the Data Server only has one record buffer that it uses to store the data in. 

Therefore, a rough estimate would look something like this:
a) The size of the Data Server is about 600kb; this part is counted once.
b) Say we have 50 workstations, each with 10 files open, at roughly 100 bytes of bookkeeping per open file: 50*10*100 = 50kb.
c) Each of the 50 children also holds a single 100-byte record buffer (per guideline 3): 50*100 = 5kb.
d) The total would be around 655kb; call it 700kb to be safe.

As you can see, the Data Server really doesn’t take up much memory. 

It really is relative to the architecture of interest. When a parent forks off a child process, its address space is cloned, so the parent and child share the same pages (page sizes vary across architectures; on VAX Ultrix it was 512 bytes, on MIPS RISC Ultrix 1024). The text (program code) is usually shared read-only. However, if you add up the virtual address sizes of all the processes running the same program, you will be misled unless you understand what you are seeing: the shared pages are counted again in every process.

The non-text pages are marked copy-on-write. If the child attempts to write to one of those pages, a new page is allocated and the data is copied from the parent's page to the new page for the child. The heap grows, but it is usually never shrunk.

To make a long story short, it is all virtual. If you never call a section of code, it may never be paged in from disk. Then there are shared libraries: if the operating system supports them and the program was built using them, that is yet another variable, since those pages can be shared across unrelated programs. And of course, if processes are being swapped out, then you need more memory.

Last Modified: 01/29/1998
Product: PRO/5 Data Server
Operating System: Unix
