
Posts

Deploying a Web Server on UpCloud using Terraform Modules

In my earlier post, I shared an example of deploying UpCloud infrastructure using Terraform from scratch. In this post, I want to share how to deploy the infrastructure using available Terraform modules to speed up the set-up process, especially for common use cases like preparing a web server. For instance, suppose we need to deploy a website with the following conditions. The website can be accessed through HTTPS; if a request comes in over HTTP, it is redirected to HTTPS. There are 2 domains, web1.yourdomain.com and web2.yourdomain.com, but users visiting "web1" should be redirected to "web2". There are 4 main modules that we need to set up the environment. Private network: it allows the load balancer to connect with the server and pass the traffic. Server: it is used to host the website. Load balancer: it includes the backend and frontend configuration. Dynamic certificate. It is requ...
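The post builds this with ready-made Terraform modules; as a rough sketch of the underlying pieces using the plain UpCloud provider resources (the zone, plan, and names below are placeholder examples, not the post's exact configuration):

```hcl
terraform {
  required_providers {
    upcloud = {
      source = "UpCloudLtd/upcloud"
    }
  }
}

# Private network so the load balancer can reach the server.
resource "upcloud_network" "private" {
  name = "web-private-net"
  zone = "de-fra1"

  ip_network {
    address = "10.0.1.0/24"
    dhcp    = true
    family  = "IPv4"
  }
}

# Server that hosts the website, attached to the private network.
resource "upcloud_server" "web" {
  hostname = "web1.yourdomain.com"
  zone     = "de-fra1"
  plan     = "1xCPU-1GB"

  template {
    storage = "Ubuntu Server 22.04 LTS (Jammy Jellyfish)"
    size    = 25
  }

  network_interface {
    type    = "private"
    network = upcloud_network.private.id
  }
}
```

The load balancer and the dynamic certificate are then layered on top, with the HTTP frontend configured to redirect to HTTPS.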

Manage Kubernetes Cluster using Rancher

Recently, I sought a simpler method to deploy and maintain Kubernetes clusters across various cloud providers. The goal was to use it for development purposes with the ability to manage the infrastructure and costs effortlessly. After exploring several options, I decided to experiment with Rancher. Rancher offers a comprehensive software stack for teams implementing container technology. It tackles both the operational and security hurdles associated with managing numerous Kubernetes clusters. Additionally, it equips DevOps teams with integrated tools essential for managing containerized workloads. Rancher also offers an open-source version, allowing free deployment within one's infrastructure. The Rancher platform can be deployed either as a Docker container or within a Kubernetes cluster utilizing the K3s engine. We can read the documentation on how to install Rancher on K3s using Helm. Rancher itself enables the creation and provisioning of Kubernetes clusters and ...
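Following the Rancher documentation, the Helm-based install looks roughly like this (the hostname and bootstrap password are placeholders, and cert-manager is expected to be installed in the cluster beforehand):

```sh
# Add the Rancher Helm repository (the "latest" channel).
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

# Rancher is installed into the cattle-system namespace.
kubectl create namespace cattle-system

# Install Rancher; the hostname must resolve to the cluster.
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.yourdomain.com \
  --set bootstrapPassword=admin
```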

Erwin Smith's Last Roar

The last moments of Commander Erwin.

Running CI/CD Pipeline with GitLab CI

GitLab allows us to deploy CI/CD pipeline runners on our own resources within our environment. This option is available not only for the self-hosted plan but also for the cloud service plan (gitlab.com). With this setup, unlike GitHub Actions, we can avoid incurring additional costs for extended pipeline runtime, because we can deploy the runner on an on-demand server and optimize its usage. GitLab CI offers several options for setting up resources to run CI/CD pipelines. A runner can be configured to handle jobs for specific groups or projects using designated tags. It can also be set to use different executors, such as Shell, Docker, Kubernetes, or VirtualBox. A comparison table of the supported executors is available in the executor documentation. Some executors offer greater flexibility and ease of use, while others may be more rigid but enhance server security. Installing the runner on our machine: for example, we will deploy the runner on an Ubuntu serve...
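As a sketch, installing and registering the runner on Ubuntu comes down to two steps (the registration token is taken from the group's or project's CI/CD settings):

```sh
# Install the runner from GitLab's official apt repository.
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner

# Register the runner; the command interactively prompts for the
# GitLab URL, the registration token, a description, the tags,
# and the executor (e.g. docker).
sudo gitlab-runner register
```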

API Gateway Using KrakenD

The increasing demands of users for high-quality web services create the need to integrate various technologies into our application. This will cause the code base to grow larger, making maintenance more difficult over time. A microservices approach offers a solution, where the application is built by combining multiple smaller services, each with a distinct function. For example, one service handles authentication, another manages business functions, another maintains file uploads, and so on. These services communicate and integrate through a common channel. On the client side, users don't need to understand how the application is built or how it functions internally. They simply send a request to a single endpoint, and processes like authentication, caching, or database querying happen seamlessly. This is where an API gateway is effective. It handles user requests and directs them to the appropriate handler. There are several tools available for building an API gateway, su...
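As an illustration of that routing role, a minimal KrakenD configuration might look like the sketch below (the backend host and paths are hypothetical):

```json
{
  "version": 3,
  "port": 8080,
  "endpoints": [
    {
      "endpoint": "/users/{id}",
      "method": "GET",
      "backend": [
        {
          "host": ["http://user-service:3000"],
          "url_pattern": "/api/users/{id}"
        }
      ]
    }
  ]
}
```

The client only ever talks to the gateway's /users/{id} endpoint; KrakenD forwards the request to the internal service behind it.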

Headless CMS for Building API Endpoints

Recently, someone introduced me to a tool called Directus, which is used for building backend systems and defining data structures. This also reminded me of a product I reviewed about two years ago, which was also built using Directus. After doing some online research, I decided to test another headless CMS solution called Strapi. When we think of CMS platforms, names like WordPress, Joomla, or Moodle might come to mind. But what does 'headless' mean in this context? A headless CMS is a type of CMS that focuses solely on the backend system, without providing the frontend interface. It can generate the API endpoints, letting developers build the frontend or client-side application separately. However, this doesn't mean headless CMS platforms lack a user interface entirely. Most, like Directus and Strapi, include a UI tool (an administrator dashboard) for designing and managing the backend system and resources. After spending some time testing and reviewing Stra...
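For context, bootstrapping a Strapi project is a one-liner (the project name is a placeholder); the admin dashboard then runs locally and exposes the content types as REST endpoints:

```sh
# Scaffold and start a new Strapi app with a local SQLite database.
npx create-strapi-app@latest my-backend --quickstart

# The admin dashboard is served at http://localhost:1337/admin,
# and a content type such as "article" is exposed as /api/articles.
```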

Bit By Bit, You're Charming My Heart

Deliver SaaS According to the Twelve-Factor App

If you haven't heard of the twelve-factor app, it gives us a methodology for developing SaaS or web apps, structured into twelve items. The recommendations have some connections with the microservice architecture and cloud-native environments that are becoming more popular today. We can learn the details on its website. In this post, we will do a quick review of the twelve points. One codebase, multiple deployments: we should maintain only one codebase for our application even though the application may be deployed into multiple environments like development, staging, and production. Having multiple codebases will lead to all kinds of complicated issues. Explicitly state dependencies: all the dependencies for running our application should be stated in the project itself. Many programming languages have a kind of file that maintains a list of the dependencies, like package.json in Node.js. We should also be aware of the dependencies related to the pla...
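For the dependency factor, a minimal package.json sketch (the package names and versions are just examples) shows how Node.js states dependencies explicitly in the project itself:

```json
{
  "name": "my-saas-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0",
    "pg": "^8.11.0"
  },
  "scripts": {
    "start": "node server.js"
  }
}
```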

How To Use Protocol Buffer in JavaScript

We have understood a few advantages of the protocol buffer, like what I've explained in my other post. Now, let's look at how we can implement it in our code. The "transpiler" tool, named protoc, supports the generation of a helper class for managing the object instance in a variety of programming languages. In this post, we use JavaScript as an example and run it in a Linux environment. Preparation: before we develop our code, we should install protoc for generating the helper class. Download the protoc binary from the release page. Extract the content and store the directories (bin and include) in the /usr/local directory so that the executable binary can be accessed directly. Run protoc --help to check its manual. Install a required dependency globally to enable protoc to generate the JavaScript files by running: npm i -g protoc-gen-js. Create a proto file: first, we should create an ...
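Once the helper class has been generated, using it from Node.js looks like the sketch below (assuming a simple Person message compiled with protoc --js_out=import_style=commonjs,binary:. and the google-protobuf runtime installed via npm i google-protobuf):

```js
// The proto file behind this sketch:
//   syntax = "proto3";
//   message Person {
//     string name = 1;
//     int32 age = 2;
//   }

// Load the generated helper class.
const { Person } = require('./person_pb');

const person = new Person();
person.setName('Alice');
person.setAge(30);

// Serialize to a compact binary payload...
const bytes = person.serializeBinary();

// ...and decode it back into an object instance.
const decoded = Person.deserializeBinary(bytes);
console.log(decoded.getName(), decoded.getAge()); // Alice 30
```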

Cycle of Hatred

We realized what was inside his mind when Pein talked to Naruto.

How To Measure Modularity

A module is a set of parts that can be used to build a more complex system. How the parts are grouped together is based on some considerations, and how well optimised our modules are, or how good the modularity of our system is, are the questions we want to answer. Several aspects are very common when we want to measure the modularity of our system or software: cohesion, coupling, and connascence. Cohesion is the indicator of whether we efficiently group some parts together. A cohesive module means all the parts in the module belong closely together. If we break a cohesive module in our code into pieces or smaller modules, that will lead to an increase in coupling across modules and a decrease in the readability of the code. There are a few types of cohesion based on the cause of the cohesiveness, such as functional, sequential (input-output relation), procedural (execution order), logical, or temporal. One that is not strongly related to the functional aspect is logical cohesion. For example, we may...
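A tiny JavaScript illustration of functional cohesion (the module and function names are made up): every function works on the same concern, so splitting them into smaller modules would only raise coupling.

```js
// invoice.js -- a functionally cohesive module: every export
// operates on the same invoice concern.
function lineTotal(line) {
  return line.quantity * line.unitPrice;
}

function invoiceTotal(invoice) {
  return invoice.lines.reduce((sum, line) => sum + lineTotal(line), 0);
}

function isPaid(invoice) {
  return invoice.amountPaid >= invoiceTotal(invoice);
}

module.exports = { lineTotal, invoiceTotal, isPaid };

// Moving lineTotal into its own module would force invoice.js to
// import it back, increasing coupling without improving cohesion.
```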

Advantages of Using Protocol Buffer

A protocol buffer is a mechanism for sharing objects between machines that is language agnostic and aims to reduce the payload size. We are already familiar with JSON, which is used by most RESTful APIs to send/receive objects to/from any kind of client. JSON is convenient and supported by many platforms, so why should we learn about the protocol buffer? Besides optimizing the payload encoding, the protocol buffer, which is also called protobuf, introduces a schema definition that must be maintained by both machines to encode and decode the delivered objects. The main processes for delivering the objects are called serialization and deserialization. Serialization is the process of transforming an object instance in an application into an optimized binary payload. Deserialization is the process of decoding the binary data back into the desired object. Let's take a look at the following table that shows a comparison of XML, JSON, and protobuf. ...
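As a sketch of the schema idea (the message definition here is illustrative), both machines keep the same .proto file and use it to serialize and deserialize the payload:

```proto
// person.proto -- the schema shared by sender and receiver.
syntax = "proto3";

message Person {
  string name = 1;  // numeric field tags, not field names,
  int32 age = 2;    // go on the wire, keeping payloads small
}
```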