
How To Measure Modularity

A module is a set of parts that can be used to build a more complex system. How the parts are selected and grouped together depends on several design considerations. The questions we want to answer are: how optimised are our modules, and how good is the modularity of our system?

Three aspects are commonly used to measure the modularity of a system or piece of software: cohesion, coupling, and connascence.


Cohesion

Cohesion indicates whether we have grouped parts together efficiently. In a cohesive module, all the parts are closely related to one another. If we break a cohesive module in our code into pieces or smaller modules, coupling across the resulting modules increases and the readability of the code decreases.

There are several types of cohesion, named after the cause of the cohesiveness, such as functional, sequential (input-output relation), procedural (execution order), logical, and temporal. One type that is not strongly related to the functional aspect is logical cohesion: for example, we may group several otherwise unrelated functions into one module just because they all manipulate a certain data type.
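
Here is a minimal sketch of logical cohesion; the class and its methods are hypothetical and are grouped only because they all manipulate strings, not because they serve one function of the system.

    // Logical cohesion: these methods have unrelated callers and purposes;
    // the only thing binding them together is the data type they work on.
    public final class StringUtils {

        private StringUtils() {}

        // Used by, say, a report generator.
        public static String capitalize(String s) {
            if (s == null || s.isEmpty()) return s;
            return Character.toUpperCase(s.charAt(0)) + s.substring(1);
        }

        // Used by, say, a URL router.
        public static String slugify(String s) {
            return s.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
        }

        // Used by, say, a logging subsystem.
        public static String truncate(String s, int max) {
            return s.length() <= max ? s : s.substring(0, max);
        }
    }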

There is a metric used to measure the cohesion level called LCOM (Lack of Cohesion in Methods). Roughly, it counts the sets of methods in a class that are not connected to one another through shared fields. For example, suppose a class has a function, called X, that only accesses one field, and another function, called Y, that only accesses different fields. Since X and Y share no field, they fall into separate sets, which results in a high LCOM value, and a high LCOM value is discouraged.
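
The following is a minimal sketch of the X and Y situation above; the class and field names are hypothetical.

    // High LCOM: methodX and methodY access disjoint sets of fields, so the
    // class splits into two disconnected groups of methods. It is really two
    // modules glued together and could be separated without adding coupling.
    public class CustomerReport {

        private String customerName;  // accessed only by methodX
        private double totalAmount;   // accessed only by methodY
        private int itemCount;        // accessed only by methodY

        public String methodX() {
            return "Customer: " + customerName;
        }

        public double methodY() {
            return itemCount == 0 ? 0.0 : totalAmount / itemCount;
        }
    }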


Coupling

Unlike cohesion, which can sometimes be quite subjective, coupling can be measured by counting the connections going into and out of a part. Afferent coupling is the number of connections coming into a part from other parts. Efferent coupling is the number of connections going out of a part to other parts.

There are three attributes related to coupling: abstractness, instability, and distance from the main sequence.

Abstractness is the number of abstract elements (e.g. interfaces, abstract classes) divided by the total number of elements, abstract plus concrete (the actual program code), so its value ranges from 0 to 1. Too many abstractions can confuse developers about how to work with the code.

Instability is the ratio between the efferent coupling and the total coupling (efferent plus afferent). So, instability is high (close to 1) when the efferent coupling is far greater than the afferent coupling, that is, when the part depends on many other parts and is easily broken by their changes.

The main sequence is the ideal relationship between abstractness and instability: the two should balance each other out.

[Figure: abstractness plotted against instability; the main sequence is the line from (instability 0, abstractness 1) to (instability 1, abstractness 0), with the zone of uselessness in the high-abstractness, high-instability corner and the zone of pain in the low-abstractness, low-instability corner.]
The distance from the main sequence is the absolute value of abstractness + instability - 1. If our code has many abstractions and its efferent coupling is also very high, the code is in the zone of uselessness. If we have a big code base with very little abstraction, the code is in the zone of pain.
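
As a quick worked example with made-up numbers: suppose a component contains 2 abstract types out of 10 in total, so its abstractness is 2 / 10 = 0.2, and it has 8 outgoing and 2 incoming dependencies, so its instability is 8 / (8 + 2) = 0.8. Its distance from the main sequence is then |0.2 + 0.8 - 1| = 0, meaning it sits exactly on the main sequence.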


Connascence

Connascence describes how one part of a system is bound to other parts: two parts are connascent if a change in one requires the other to be modified in order to maintain the overall correctness of the system. There are two categories of connascence, static (source-code level coupling) and dynamic (execution-time coupling).

  • Static
    • Name, like the name of variables
    • Type, like the structure of an object
    • Meaning, like the meaning of a certain constant value
    • Position, like the order of function arguments (see the sketch after this list)
    • Algorithm, like an authorization mechanism
  • Dynamic
    • Execution, like the order of executing methods of an object
    • Timing, like the order of executing two separate processes
    • Values, like values in primary and backup databases
    • Identity, like objects communicated in a distributed queue
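
Below is a hypothetical Java sketch of connascence of position. Both account parameters share the same type, so if the parameter order changes, every call site must be updated by hand, and the compiler will not flag a mistaken swap.

    // Connascence of position: callers must know that the first argument is
    // the source account and the second is the destination.
    public class TransferExample {

        static void transfer(String fromAccount, String toAccount, double amount) {
            System.out.println("Debit " + fromAccount + ", credit " + toAccount + ": " + amount);
        }

        public static void main(String[] args) {
            // Correct today, but silently wrong if the parameter order changes.
            transfer("alice", "bob", 100.0);
        }
    }

A common way to weaken this coupling is to pass a small parameter object with named fields, which converts connascence of position into the weaker connascence of name.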

There are three properties related to the level of connascence: strength, locality, and degree. Strength indicates the effort needed for a developer to refactor the coupling. Locality indicates how close the coupled modules are to each other. Degree indicates the size of the impact of a change.

[Figure: connascence types ordered by strength, from the weakest (connascence of name) to the strongest (connascence of identity), with the static forms generally weaker than the dynamic forms.]
Connascence of name is the weakest because it is easy to refactor: today's code editors can rename a variable across an entire code base automatically. Connascence of position has higher strength: if we change the order of arguments in a function, we must refactor every part that calls the function.

All three properties should be considered when we design the modularity of our system. When coupled parts are far apart, which means a low level of locality, it is better to keep the connascence between them weak. When the parts are in the same class, which means a high level of locality, a higher-strength connascence is fine. Even a high-strength connascence between parts can be acceptable when those parts appear in only a few places in the code, which means the degree, and therefore the impact of refactoring, is low.

There are some recommendations for improving modularity in a system:

  • Break the system into several encapsulated elements.
  • Minimize connascence across encapsulated elements.
  • It is fine to maximize connascence within an encapsulated element's boundary; this follows naturally from encapsulation and from minimizing cross-boundary connascence.
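
The following hypothetical sketch illustrates these recommendations. Before the change, two elements share connascence of meaning across a boundary: both must agree that the integer 2 means "shipped". After encapsulating the meaning in one element, callers rely only on a name, the weakest form of connascence.

    public class OrderExample {

        // The meaning of each status now lives in exactly one place.
        enum OrderStatus { PENDING, PAID, SHIPPED }

        static class OrderService {
            private OrderStatus status = OrderStatus.PENDING;

            // Before: void setStatus(int code), where every caller had to
            // know that 2 meant "shipped" (connascence of meaning).
            void setStatus(OrderStatus status) {
                this.status = status;
            }

            OrderStatus getStatus() {
                return status;
            }
        }

        public static void main(String[] args) {
            OrderService orderService = new OrderService();
            orderService.setStatus(OrderStatus.SHIPPED); // connascence of name only
            System.out.println(orderService.getStatus());
        }
    }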
