Aegis Blog

Automating Server Deployment on Linode.com Using Terraform

In today’s fast-paced world of technology, automation is a key driver of efficiency and reliability in IT operations. When it comes to deploying and managing servers, automation tools like Terraform can be a game-changer. Linode, a popular cloud hosting provider, offers a robust platform for hosting virtual machines, and with Terraform, you can automate the provisioning and management of Linode servers effortlessly. In this article, we’ll explore how to automate server deployment on Linode.com using Terraform.

Why Automate with Terraform and Linode?

Before diving into the technical details, let’s understand why Terraform and Linode are a powerful combination for automating server deployment.

1. Infrastructure as Code (IaC): Terraform allows you to define your infrastructure as code, which means you can version-control your infrastructure configuration, making it easy to track changes, collaborate with team members, and reproduce environments.

2. Multi-Cloud Support: While we’re focusing on Linode in this article, Terraform supports multiple cloud providers, making it versatile for managing infrastructure across different platforms.

3. Consistency and Reproducibility: Terraform ensures that your server deployments are consistent and reproducible. You define the desired state of your infrastructure, and Terraform takes care of making it a reality.

4. Scalability: As your infrastructure needs grow, Terraform can easily scale with you. Adding new Linode servers or modifying existing ones can be achieved with minimal effort.

Prerequisites

Before we get started, you’ll need to have the following prerequisites in place:

  1. Terraform Installed: Download and install Terraform from the official website (https://www.terraform.io/downloads.html).
  2. Linode Account: Sign up for a Linode account (https://www.linode.com/) if you don’t have one already.

Automating Server Deployment

Let’s walk through the steps to automate server deployment on Linode.com using Terraform:

Step 1: Configure Linode Provider

First, you need to configure the Linode provider in your Terraform configuration file (main.tf).

terraform {
  required_version = ">= 0.15"
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }
}
provider "linode" {
  token = var.linode_api_token
}

Step 2: Define Server Resources

Next, define the server resources you want to create. Create a new file called ‘server.tf’. Here’s an example of creating a Linode instance:

resource "linode_instance" "example_server" {
  label  = "example-server"
  type   = "g6-standard-2"
  region = "us-east"
  image  = "linode/debian10"
}

You can customize the label, Linode type, region, and image to match your requirements.
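If you also want Terraform to print the server’s address at the end of a run, you can declare it explicitly as an output. This is an optional sketch; ip_address is an attribute exported by the Linode provider’s linode_instance resource, so check the provider documentation for the full attribute list:

```hcl
# Optional: expose the new server's IP address after `terraform apply`.
# ip_address is exported by the linode_instance resource.
output "server_ip" {
  value = linode_instance.example_server.ip_address
}
```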

Step 3: Create your variables files

Next, create the following files for your variables:
variables.tf:

variable "linode_api_token" {
  type      = string
  sensitive = true
}


variables.tfvars:

linode_api_token = "your-linode-api-token"

Note: Terraform only loads terraform.tfvars (or *.auto.tfvars) files automatically. With a file named variables.tfvars, pass it explicitly, e.g. terraform plan -var-file=variables.tfvars.

Step 4: Initialize and Apply

After defining your resources, navigate to the directory containing your Terraform configuration files and run the following commands:

terraform init
terraform plan
terraform apply

Terraform will initialize the project and show you a plan of what it intends to do. If everything looks good, confirm by typing yes.

Step 5: Verify Deployment

Once Terraform completes the deployment, it will provide you with information about the resources created, including the Linode server’s IP address.

Step 6: Manage Your Infrastructure

With Terraform, managing your Linode server is a breeze. You can make changes to your infrastructure by updating your Terraform configuration and running terraform apply again. Terraform will automatically determine what changes need to be made to achieve the desired state.
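For example, resizing the instance is just an edit to server.tf followed by another plan/apply cycle. A sketch (g6-standard-4 is one of Linode’s plan IDs; verify which types are available in your region):

```hcl
resource "linode_instance" "example_server" {
  label  = "example-server"
  type   = "g6-standard-4"   # changed from g6-standard-2; Terraform plans a resize
  region = "us-east"
  image  = "linode/debian10"
}
```

Running terraform plan after the edit shows the pending change before anything is applied.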

Conclusion

Automating server deployment on Linode.com using Terraform empowers you to manage your infrastructure efficiently, consistently, and at scale. This article covered the basics of setting up Terraform with Linode and creating a Linode instance, but the possibilities are endless. You can extend your configuration to include networking, storage, and more.

By embracing infrastructure as code and automation, you’ll not only save time and effort but also reduce the risk of human errors in your server deployment process. Explore further, experiment, and unlock the full potential of Linode and Terraform for your infrastructure needs. Happy automating!

How to Set up WireGuard VPN on Ubuntu 22.04

Introduction:

In today’s digital age, where online privacy and data security have become paramount concerns, Virtual Private Networks (VPNs) have emerged as indispensable tools for safeguarding our internet activities. Among the wide array of VPN solutions available, WireGuard has garnered significant attention and popularity for its groundbreaking simplicity, lightning-fast performance, and robust cryptographic design.

If you’re looking to enhance your online privacy and protect sensitive information from prying eyes, WireGuard offers an innovative approach that sets it apart from traditional VPN protocols. In this step-by-step tutorial, we will guide you through the seamless process of installing and configuring WireGuard on your Ubuntu 22.04 server. Additionally, we’ll demonstrate how to create new users, each with their dedicated keys, enabling secure and anonymous connections.

Whether you’re a seasoned sysadmin or a VPN enthusiast, this comprehensive guide will equip you with the knowledge and confidence to deploy WireGuard effortlessly and take full advantage of its cutting-edge features.

Part 1: Install WireGuard

To begin, we’ll install the WireGuard package on your Ubuntu 22.04 server.

Step 1:
Update the package list and install WireGuard:

sudo apt update
sudo apt install wireguard

Step 2:
Verify that the WireGuard module is loaded:

sudo modprobe wireguard

Step 3:
Ensure the module loads at boot time:

echo "wireguard" | sudo tee -a /etc/modules

Part 2: Generate Public and Private Keys

Now, let’s generate the private and public keys needed for WireGuard.

Step 1:
Generate the server’s private and public keys:

wg genkey | sudo tee /etc/wireguard/privatekey | wg pubkey | sudo tee /etc/wireguard/publickey

Step 2:
Secure the private key file:

sudo chmod 600 /etc/wireguard/privatekey

Part 3: Configure WireGuard

With the keys generated, we’ll proceed to configure the WireGuard interface.

Step 1:
Create a new configuration file for the WireGuard interface (e.g., wg0):

sudo nano /etc/wireguard/wg0.conf

Step 2:
Add the following configuration to wg0.conf, customizing the address range and port as needed:

[Interface]
Address = 10.0.0.1/24      # Replace with your desired IP range
PrivateKey = <SERVER_PRIVATE_KEY>
ListenPort = 51820        # Change this port if needed


# Uncomment the following if using a cloud server or behind NAT
# PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ensX -j MASQUERADE
# PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ensX -j MASQUERADE

Replace <SERVER_PRIVATE_KEY> with the content of /etc/wireguard/privatekey generated earlier. If necessary, uncomment the PostUp and PostDown lines and replace ensX with your external network interface.
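One detail this configuration depends on: if clients will route their internet traffic through the server (as the MASQUERADE rules imply), the kernel must have IP forwarding enabled. A sketch of doing that on Ubuntu (requires root; the file name 99-wireguard.conf is an arbitrary choice):

```
# Enable IPv4 forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-wireguard.conf
```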

Step 3:
Save and exit the wg0.conf file.

Part 4: Start the WireGuard Service

It’s time to start the WireGuard service.

Step 1:
Initiate the WireGuard service:

sudo systemctl start wg-quick@wg0

Step 2:
Enable WireGuard to start on boot:

sudo systemctl enable wg-quick@wg0

Part 5: Create New Users

Now, let’s create new users and generate their private and public keys.

Step 1:
Generate private and public keys for each user:

wg genkey | sudo tee /etc/wireguard/client_privatekey1 | wg pubkey | sudo tee /etc/wireguard/client_publickey1
wg genkey | sudo tee /etc/wireguard/client_privatekey2 | wg pubkey | sudo tee /etc/wireguard/client_publickey2
# Generate keys for more users as needed

Step 2:
Create a new configuration file for each user. Replace user1 and user2 with your desired usernames:

sudo nano /etc/wireguard/user1.conf

Step 3:
Add the following content to each user’s configuration file, replacing the placeholders with the corresponding keys and server’s public IP address:

[Interface]
PrivateKey = <CLIENT_PRIVATE_KEY>
Address = 10.0.0.2/32                # Assign a unique IP address for each user
DNS = 8.8.8.8                        # Optional: Set preferred DNS server

[Peer]
PublicKey = <SERVER_PUBLIC_KEY>
AllowedIPs = 0.0.0.0/0
Endpoint = YOUR_SERVER_PUBLIC_IP:51820

Replace <CLIENT_PRIVATE_KEY> with the content of the corresponding client private key and <SERVER_PUBLIC_KEY> with the content of /etc/wireguard/publickey generated earlier. Modify the IP address and DNS settings as desired.

Step 4:
Save and close each user’s configuration file. This file gets used with the WireGuard client to connect to your VPN.
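If you manage more than a couple of users, writing these files by hand gets tedious. Below is a small helper sketch; the function name, argument order, and the placeholder keys are our own invention for illustration, not part of WireGuard itself:

```shell
# Hypothetical helper: render a WireGuard client config from parameters.
# Usage: make_client_conf PRIVATE_KEY CLIENT_IP SERVER_PUBKEY ENDPOINT > userN.conf
make_client_conf() {
  cat <<EOF
[Interface]
PrivateKey = $1
Address = $2
DNS = 8.8.8.8

[Peer]
PublicKey = $3
AllowedIPs = 0.0.0.0/0
Endpoint = $4
EOF
}

# Example with placeholder keys (substitute the real keys generated earlier):
make_client_conf "CLIENT_PRIVATE_KEY" "10.0.0.2/32" "SERVER_PUBLIC_KEY" "203.0.113.10:51820" > user1.conf
```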

IMPORTANT

You also need to add each user to the wg0.conf file under all the settings:

# User1
[Peer]
PublicKey = <USER_PUBLIC_KEY>
AllowedIPs = 10.0.0.2/32    # Must match the Address assigned in that user's config file

Save after adding each user, and restart the WireGuard service.
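Restarting wg-quick briefly drops existing tunnels. As an alternative, a peer can be added to the running interface with the wg tool (a sketch; requires root and an active wg0 interface):

```
# Add a peer to the live interface without restarting it
sudo wg set wg0 peer <USER_PUBLIC_KEY> allowed-ips 10.0.0.2/32

# Persist the running configuration back to /etc/wireguard/wg0.conf
sudo wg-quick save wg0
```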

Part 6: Restart the WireGuard Service

Whenever you make changes to the WireGuard configuration files, you need to restart the service for the changes to take effect.

sudo systemctl restart wg-quick@wg0

Now that your WireGuard VPN is up and running, you may want to connect to it from various devices. Thankfully, WireGuard has clients available for multiple platforms, making it convenient to access your VPN from desktops, laptops, smartphones, and even routers. Below are the links to download the WireGuard clients for some popular platforms:

  1. Windows: Download WireGuard for Windows from the official site (https://www.wireguard.com/install/)
  2. macOS: Download WireGuard for macOS from the Mac App Store or the official site
  3. Linux: Refer to your distribution’s package manager or visit the WireGuard installation guide (https://www.wireguard.com/install/) for specific instructions.
  4. Android: Download WireGuard for Android from Google Play
  5. iOS: Download WireGuard for iOS from the App Store

With these clients, you can easily establish secure connections to your WireGuard VPN and protect your data across various devices and networks. Remember to import the respective client configuration files generated earlier to initiate connections seamlessly.

Virtual Private Networks (VPNs) have become an essential tool for securing your online connections and maintaining privacy. By harnessing the power of WireGuard, you have embraced a contemporary VPN solution that surpasses conventional protocols in terms of speed, efficiency, and security. Whether you are an individual seeking to safeguard personal information or a business safeguarding critical data, WireGuard’s streamlined architecture and cryptographic excellence have you covered. Embrace the future of VPN technology and enjoy safe, encrypted browsing with WireGuard!

Setting up a CI/CD Pipeline: Deployment Stage

Introduction:

As we approach the final stage of our exploration into the world of CI/CD pipelines, we dive into the fourth and crucial phase: The Deployment Stage. This stage marks the ultimate goal of continuous integration and deployment – delivering tested and validated code to production or staging environments automatically. The Deployment stage is the culmination of the entire CI/CD pipeline, ensuring that the application’s latest version is safely and efficiently deployed, meeting end-user needs promptly and reliably.

Deployment Stage: Accelerating Software Delivery to Production

The Deployment stage is the crown jewel of the CI/CD pipeline, where the fully tested and validated code transforms into a tangible product, ready to serve its purpose. Automated deployment brings numerous benefits to the software development process:

  1. Continuous Delivery: Automated deployment enables continuous delivery, wherein every code change that passes the Test stage is automatically deployed to production. This process ensures that the application is always up-to-date with the latest features and bug fixes, keeping pace with user expectations.
  2. Faster Time-to-Market: The Deployment stage eliminates manual intervention in the release process, drastically reducing deployment time. This accelerated delivery allows organizations to seize market opportunities quickly, staying ahead of the competition.
  3. Reduced Downtime: Automated deployment workflows ensure consistency across environments, minimizing the risk of configuration errors and reducing the chance of system downtime during the release process.
  4. Rollback Capability: In the event of unexpected issues post-deployment, automated deployment enables a seamless rollback to a previously stable version. This safety net enhances reliability and minimizes the impact of potential failures.
  5. Environment Independence: The Deployment stage facilitates deploying applications to various environments, such as production, staging, testing, or development, without modification. This versatility simplifies the testing and development process.

Key Components of the Deployment Stage:

  1. Deployment Scripts: Deployment scripts are essential tools used to automate the deployment process. These scripts, written in languages like Bash, PowerShell, or Python, define the steps needed to deploy the application to the target environment.
  2. Configuration Management: Configuration management tools, such as Ansible, Puppet, or Chef, play a crucial role in maintaining consistent configurations across different environments. These tools ensure that the application’s environment is correctly set up, reducing deployment-related issues.
  3. Orchestration Tools: Orchestration tools, such as Kubernetes or Docker Swarm, are essential for deploying containerized applications. These tools manage the container lifecycle, scaling, and load balancing, making it easier to deploy and manage applications in containerized environments.
  4. Canary Deployments: Canary deployments are a deployment strategy that allows a new version of the application to be rolled out gradually to a subset of users or servers. This strategy helps to assess the new version’s performance and stability before full deployment.
  5. Blue-Green Deployments: Blue-Green deployments involve maintaining two identical environments – the “blue” environment running the current version and the “green” environment running the new version. Traffic is switched between the two environments during deployment, providing a smooth transition and minimizing downtime.

Deployment Workflow:

  1. Triggering Deployment: The deployment process is usually triggered automatically after a successful Test stage, where all the tests have passed, and the application is deemed production-ready.
  2. Versioning and Artifact Retrieval: The deployment scripts retrieve the appropriate version of the application artifact from the artifact repository, ensuring that the correct and tested code is deployed.
  3. Environment Setup: The deployment scripts configure the target environment with all the necessary dependencies, settings, and configurations, ensuring consistency across different environments.
  4. Deployment and Verification: The application is deployed to the target environment, and post-deployment tests are conducted to verify that the application is functioning as expected. These tests may include smoke tests, basic functionality tests, and health checks.
  5. Traffic Switching: In more advanced deployment strategies like Canary or Blue-Green deployments, traffic is gradually or instantly switched between the old and new versions to ensure a seamless transition for users.
  6. Monitoring and Rollback: After deployment, monitoring tools track the application’s performance and stability in the production environment. In case of any issues or anomalies, an automated rollback to the previous stable version can be triggered to maintain service availability.
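The workflow above can be sketched as a deployment script skeleton. Everything here is illustrative: the function names, the artifact name, and the health check are stand-ins for whatever your pipeline actually uses.

```shell
#!/bin/sh
# Illustrative deployment script skeleton; each step is a stub.
set -e

fetch_artifact() {
  # Placeholder: would pull the versioned artifact from an artifact repository
  echo "app-1.2.3.tar.gz"
}

deploy() {
  # Placeholder: would unpack the artifact and restart the service
  echo "deployed $1"
}

health_check() {
  # Placeholder: would probe a health endpoint; non-zero exit triggers rollback
  return 0
}

artifact=$(fetch_artifact)
deploy "$artifact"
if ! health_check; then
  echo "health check failed, rolling back" >&2
  exit 1
fi
echo "deployment of $artifact verified"
```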

Conclusion:

The Deployment stage marks the grand finale of the CI/CD pipeline, culminating in the successful delivery of fully tested and validated code to production or staging environments. By automating deployment processes, organizations benefit from continuous delivery, reduced downtime, and faster time-to-market. Embracing deployment scripts, configuration management, orchestration tools, and advanced deployment strategies like Canary and Blue-Green, development teams can confidently deliver applications with enhanced reliability and quality.

In this article series, we have journeyed through the four key stages of a CI/CD pipeline: Source, Build, Test, and Deployment. Each stage plays a vital role in streamlining the software development lifecycle, promoting collaboration, and ensuring the rapid delivery of high-quality applications. By harnessing the power of automation and incorporating best practices, organizations can drive innovation, maintain competitiveness, and meet the evolving needs of their users with unmatched speed and efficiency. Embrace the transformative power of CI/CD, and embark on a journey of continuous improvement in software delivery.

Setting up a CI/CD Pipeline: Testing Stage

Introduction:

In our exploration of the essential components that constitute a well-optimized CI/CD pipeline, we arrive at the third stage: The Test Stage. As the gatekeeper of software quality, the Test stage is responsible for ensuring that the application is thoroughly evaluated, free from critical bugs, and capable of meeting user expectations. Automated testing forms the cornerstone of this stage, delivering rapid feedback and enabling developers to iteratively enhance the codebase while maintaining high standards of reliability and functionality.

Test Stage: Ensuring Software Reliability and Quality

The Test stage in the CI/CD pipeline is a pivotal phase where the application’s functionality, security, and performance are subjected to automated evaluations. This stage follows the Build phase, where the application code is compiled and packaged into deployable artifacts. Automating tests significantly benefits the development process, offering several advantages:

  1. Faster Feedback Loop: Automated testing provides rapid feedback on code changes. Developers can identify and address defects early in the development cycle, reducing debugging efforts and improving overall productivity.
  2. Increased Test Coverage: Automated tests can cover a broad spectrum of scenarios, including unit tests, integration tests, end-to-end tests, and security tests. This comprehensive approach ensures that the application’s various aspects are thoroughly vetted for quality.
  3. Consistent Results: Automated tests deliver consistent and repeatable results, regardless of the tester or testing environment. This consistency enhances confidence in the application’s reliability.
  4. Continuous Validation: As code changes are integrated and deployed regularly, the Test stage continuously validates the application’s integrity, ensuring that new features and bug fixes do not introduce unintended side effects.

Key Components of the Test Stage:

  1. Test Frameworks: Test frameworks are the backbone of automated testing in the CI/CD pipeline. These frameworks, such as JUnit, NUnit, Selenium, or Cypress, provide the necessary tools and libraries to define, organize, and execute tests efficiently.
  2. Unit Testing: Unit tests evaluate individual components or units of code in isolation. These tests verify that each unit behaves as expected, independently of its interactions with other components.
  3. Integration Testing: Integration tests assess how multiple units or components interact with one another. This type of testing ensures that the integrated system functions correctly as a whole.
  4. End-to-End Testing: End-to-End (E2E) tests examine the application’s behavior from the end-user’s perspective, covering multiple user interactions across the entire system. E2E tests validate the application’s functionality and user experience.
  5. Security Testing: Security testing aims to identify vulnerabilities and potential security threats within the application. Techniques like static code analysis, dynamic application security testing (DAST), and penetration testing are utilized to assess security aspects comprehensively.
  6. Performance Testing: Performance testing evaluates the application’s responsiveness and scalability under various conditions, simulating high user loads and stress scenarios. This ensures the application can handle real-world usage without performance bottlenecks.

Test Automation Workflow:

  1. Test Trigger: Automated tests are typically triggered after a successful build in the CI/CD pipeline. The continuous integration platform, such as Jenkins or GitLab CI, initiates the test suite execution, with results recorded for further analysis.
  2. Test Execution: The test suite contains various types of automated tests, including unit tests, integration tests, E2E tests, security tests, and performance tests. These tests run in isolation or in predefined sequences, examining different aspects of the application.
  3. Test Reporting: After test execution, the testing framework generates detailed reports containing information about passed and failed tests, test coverage, and any potential issues identified.
  4. Result Analysis: Developers and QA teams analyze the test results to identify failing tests or performance bottlenecks. Failed tests indicate potential defects that need immediate attention.
  5. Feedback Loop: In case of test failures, developers receive instant feedback. They address the issues promptly, updating the codebase and triggering another round of automated testing.

Continuous Testing and Shift-Left Approach:

Continuous Testing, a critical aspect of the Test stage, focuses on integrating testing into every stage of the development process. By adopting a Shift-Left approach, testing starts at the earliest stages of the development cycle. Developers conduct unit tests and integration tests locally on their development environments before committing changes to the version-controlled repository. These automated tests quickly validate code changes, reducing the likelihood of defects progressing to later stages of the CI/CD pipeline.
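Even in shell-based projects, the shift-left idea boils down to small checks a developer can run locally before committing. A minimal sketch (the function and test names are invented for illustration):

```shell
# A function under test and a tiny assertion-style check,
# runnable locally before a commit triggers the pipeline.
add() { echo $(( $1 + $2 )); }

test_add() {
  result=$(add 2 3)
  [ "$result" = "5" ] || { echo "FAIL: add 2 3 -> $result"; return 1; }
  echo "PASS: add"
}

test_add
```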

Conclusion:

The Test stage in the CI/CD pipeline plays a crucial role in ensuring software reliability and quality. Automated testing allows for faster feedback, increased test coverage, and consistent validation of the application. By employing various types of tests, such as unit tests, integration tests, E2E tests, security tests, and performance tests, development teams can confidently deliver robust and high-quality applications to users.

In the next part of our article series, we will explore the Deployment stage, where the fully tested and validated application is automatically deployed to production or staging environments. Stay tuned as we continue our journey through the CI/CD pipeline, unlocking valuable insights to accelerate software delivery while maintaining the highest standards of quality and reliability.

Setting up a CI/CD Pipeline: Build Stage


Introduction:

In our quest to unravel the secrets of a well-optimized CI/CD pipeline, we embark on the second part of our article series: The Build Stage. As the heartbeat of the continuous integration process, this stage focuses on automating code compilation and packaging, laying the groundwork for reliable and efficient software delivery.

Build Stage: Automating Code Compilation and Packaging

Once the source code is managed in a version-controlled Git repository, the Build stage takes center stage. This critical phase aims to transform raw source code into executable artifacts, ensuring that the application is in a deployable state. Automating this process is essential for several reasons:

  1. Early Error Detection: Automated builds allow developers to detect compilation errors, syntax issues, or missing dependencies at an early stage. This saves precious time and prevents faulty code from progressing further down the pipeline.
  2. Consistent Builds: Manual builds run the risk of inconsistencies, making it challenging to reproduce specific versions. Automation guarantees uniformity across all environments, reducing deployment-related surprises.
  3. Faster Delivery: Automated builds enable quicker turnaround times, speeding up the entire development cycle. Developers can focus on coding while the CI/CD pipeline takes care of the building process.
  4. Build Artifacts: The output of the build process is a deployable artifact—a binary, package, or container image—that serves as the foundation for subsequent testing and deployment stages.

Jenkins is a popular tool in CI/CD

Key Components of the Build Stage:

  1. Build Tools: Build tools are essential components of the Build stage that facilitate code compilation and packaging. These tools, such as Maven, Gradle, or Make, define the build process and manage dependencies efficiently. Developers can specify the build steps, such as compiling source code, running unit tests, and packaging the application, in a build configuration file.
  2. Build Configuration: A build configuration file, such as the popular pom.xml for Maven or build.gradle for Gradle, plays a crucial role in orchestrating the build process. This file contains information about the project’s dependencies, build steps, and other critical parameters. CI/CD platforms use this configuration to initiate the automated build process whenever code changes are pushed to the source repository.
  3. CI/CD Integration: Continuous integration platforms, like Jenkins, GitLab CI, CircleCI, or Travis CI, integrate seamlessly with the version-controlled Git repository. These platforms monitor the repository for code changes and automatically trigger builds when new commits are detected. This integration ensures that every code change undergoes the build process, fostering a culture of continuous integration.
  4. Build Caches: Build caches are instrumental in optimizing build times. As projects grow and dependencies remain unchanged, recompiling everything from scratch can be time-consuming. Build caches store compiled dependencies, intermediate build artifacts, and other reusable data. By leveraging cached information, subsequent builds can skip redundant compilation steps, resulting in faster and more efficient builds.
  5. Artifact Repositories: After successful builds, generated artifacts are stored in artifact repositories, such as Nexus or Artifactory. These repositories serve as a centralized hub for storing and managing deployable artifacts. Having a well-organized artifact repository ensures version management and easy access to reliable and deployable code, facilitating the testing and deployment stages.
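As a concrete illustration of how these components meet, here is a minimal declarative Jenkins pipeline sketch tying a build to an artifact step. The stage names and the mvn command assume a Maven project; adapt them to your own build tool:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile, run unit tests, and package via the build tool (Maven here)
                sh 'mvn -B clean package'
            }
        }
        stage('Archive') {
            steps {
                // Keep the build artifact for the Test and Deployment stages
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
    }
}
```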

Conclusion:

The Build stage plays a pivotal role in a robust CI/CD pipeline by automating code compilation and packaging. Automated builds ensure early error detection, consistent artifacts, and faster delivery, empowering development teams to deliver high-quality software with greater efficiency. By incorporating essential components such as build tools, configuration files, CI/CD integration, build caches, and artifact repositories, organizations can optimize their software delivery process and foster a culture of continuous integration and deployment. In the next part of our article series, we will explore the Test stage, where automated testing becomes the protagonist in ensuring the application’s reliability and functionality. Stay tuned as we continue our journey towards mastering the CI/CD pipeline.

Setting Up a CI/CD Pipeline: Source Stage

This four-part article series aims to demystify the CI/CD pipeline by delving into its four main stages: Source, Build, Test, and Deployment. Each stage plays a critical role in optimizing the software delivery process.

  1. Source: The Source stage establishes the foundation of the CI/CD pipeline by setting up a Git repository to centrally manage and track the application’s source code. With version control in place, developers can work collaboratively, manage changes efficiently, and ensure a reliable codebase.
  2. Build: The Build stage automates the process of compiling and assembling the source code into executable artifacts. By automating this step, developers can detect compilation errors early and ensure that the code is in a deployable state.
  3. Test: In the Test stage, automated testing takes center stage. Comprehensive test suites validate the application’s functionality, security, and performance, catching bugs before they reach production.
  4. Deployment: The final stage, Deployment, automates the release of the application to production or staging environments. This allows for consistent and reliable delivery, reducing the risk of manual errors and ensuring a seamless experience for end-users.

By understanding each stage’s significance, developers and DevOps teams can build efficient CI/CD pipelines that foster a culture of continuous improvement and innovation, enabling organizations to deliver high-quality software at an unprecedented pace. Stay tuned as we embark on this journey through each stage, uncovering valuable insights to elevate your software delivery process.

Stage 1: Source

The Source stage serves as the foundation of the CI/CD pipeline, providing a centralized location for storing and managing the source code of your application. Git, a widely adopted distributed version control system, plays a pivotal role in this stage. Setting up a Git repository is the initial step to enable seamless collaboration among developers and streamline the code integration process.

Step 1: Git Installation

Before creating a Git repository, ensure that you have Git installed on your development machine. Git is compatible with Windows, macOS, and Linux, and you can download the installer for your operating system from the official website (https://git-scm.com/downloads). Once installed, you can open a terminal or command prompt and verify the installation by running the following command:

git --version

Step 2: Initializing a Local Repository

Once Git is installed, navigate to your project’s root directory using the terminal or command prompt. To initialize a new Git repository, execute the following command:

git init

This command creates a hidden .git directory within your project, where Git stores all the necessary data and configuration for version control.

Step 3: Staging and Committing Changes

After initializing the repository, you can begin tracking changes to your project’s files. Use the following commands to add files to the staging area and commit them to the repository:

# Add specific files to the staging area
git add <file1> <file2>

# Add all changed files to the staging area
git add .

# Commit the staged changes with a descriptive message
git commit -m "Your commit message here"

Step 4: Configuring Remote Repository

For collaborative development and centralized source code management, it’s essential to connect your local Git repository to a remote repository hosting service like GitHub, GitLab, or Bitbucket. Create a new empty repository on your preferred hosting service and follow their instructions to add a remote origin to your local repository:

git remote add origin <remote-repository-URL>

Step 5: Pushing Changes to the Remote Repository

To synchronize your local repository with the remote one, use the git push command:

git push -u origin master

This command pushes your local master branch (substitute main if that is your repository’s default branch) to the remote repository. Subsequent pushes can be made using just git push.

Conclusion

Setting up a Git repository is the foundation for establishing an efficient CI/CD pipeline. By creating a centralized source control system, developers can collaborate seamlessly, track changes, and manage versions effectively. In the next article of this series, we will explore the Build stage, where we’ll delve into the process of automating the compilation of your code. Stay tuned for more insights on building a robust CI/CD pipeline that enables rapid and reliable application delivery.

How to Add S3 Storage to Your Linux Server

Introduction

Expanding the storage capacity of your Linux server can be crucial when dealing with large amounts of data or running resource-intensive applications. Amazon S3 (Simple Storage Service) is a popular cloud storage solution, but it might not be the most cost-effective option for everyone. In this article, we’ll guide you through the process of adding S3 storage from Wasabi.com to your Linux server. Wasabi.com is known for its competitive pricing and compatibility with the S3 API, making it an excellent alternative for cloud storage.

Prerequisites

  1. A Linux server with root or sudo access.
  2. A Wasabi.com account with an S3 bucket created.

Step 1: Install S3FS-FUSE

To mount Wasabi S3 storage on your Linux server, we’ll use S3FS-FUSE, a tool that allows us to mount S3 buckets as a local file system. We’ll first install S3FS-FUSE:

Update your system's package list:
sudo apt update

Install the required dependencies:

sudo apt install automake autotools-dev g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config

Clone the S3FS repository from GitHub:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git

Change into the cloned directory:

cd s3fs-fuse

Build and install S3FS-FUSE:

./autogen.sh
./configure
make
sudo make install

Step 2: Configure Wasabi S3 Access

Before mounting the Wasabi S3 bucket, you need to retrieve your Wasabi Access Key ID and Secret Access Key:

  1. Log in to your Wasabi.com account.
  2. In the Wasabi Management Console, navigate to the “Users” section and select your user account.
  3. Go to the “Keys” tab, and click on “Create New Access Key” to generate a new access key pair.
  4. Save the Access Key ID and Secret Access Key in a secure location.
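s3fs-fuse reads these keys from a credentials file (referenced later as /path/to/credentials_s3fs) in the form ACCESS_KEY_ID:SECRET_ACCESS_KEY. A minimal sketch, using ~/.passwd-s3fs as an assumed location and placeholder key values:

```shell
# Write the key pair in the ACCESS_KEY_ID:SECRET_ACCESS_KEY format s3fs expects
# (the values below are placeholders for the keys from the Wasabi console)
echo "YOUR_ACCESS_KEY_ID:YOUR_SECRET_ACCESS_KEY" > "$HOME/.passwd-s3fs"

# s3fs refuses to use credential files readable by other users
chmod 600 "$HOME/.passwd-s3fs"
```

Whatever path you choose, pass it to the passwd_file option in the mount commands below.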

Step 3: Create a Mount Point

Now that you have S3FS-FUSE installed and your Wasabi S3 credentials ready, you need to create a directory that will serve as the mount point for the S3 bucket.

Choose a directory for the mount point, for example:

sudo mkdir /mnt/wasabi-s3

Adjust the directory permissions:

sudo chown <your_username>:<your_group> /mnt/wasabi-s3

Replace <your_username> and <your_group> with your actual Linux username and primary group.

Step 4: Mount the Wasabi S3 Bucket

With S3FS-FUSE installed, Wasabi S3 credentials ready, and a mount point available, you can now mount the S3 bucket to your Linux server:

s3fs <bucket_name> /mnt/wasabi-s3 -o passwd_file=/path/to/credentials_s3fs -o url=https://s3.wasabisys.com -o umask=022

Replace <bucket_name> with the name of your Wasabi S3 bucket. For example, if your bucket’s name is “my-linux-backups,” the command would be:

s3fs my-linux-backups /mnt/wasabi-s3 -o passwd_file=/path/to/credentials_s3fs -o url=https://s3.wasabisys.com -o umask=022

Step 5: Test the Mount

To ensure the Wasabi S3 bucket is successfully mounted, list the contents of the mount point:

ls /mnt/wasabi-s3

If everything is set up correctly, you should see the contents of your Wasabi S3 bucket.

Step 6: Mount at Boot (Optional)

To automatically mount the Wasabi S3 bucket each time your Linux server boots, add an entry to the /etc/fstab file:

Open the /etc/fstab file in a text editor:

sudo nano /etc/fstab

Add the following line to the end of the file:

s3fs#<bucket_name> /mnt/wasabi-s3 fuse _netdev,passwd_file=/path/to/credentials_s3fs,url=https://s3.wasabisys.com,umask=022 0 0 

Replace <bucket_name> and /path/to/credentials_s3fs with the appropriate values.

Save the file and exit the text editor.

Conclusion

By following these steps, you have successfully added Wasabi S3 storage to your Linux server. This allows you to expand your server’s storage capacity and take advantage of the cost-effective Wasabi.com S3 storage solution. Remember to ensure that your S3 bucket is appropriately secured, especially if it contains sensitive data. With your Wasabi S3 bucket mounted, you can now easily manage and access your data on the cloud storage directly from your Linux server.

5 Essential Tips to Fortify Your WordPress Site

Introduction:

WordPress is a popular and versatile content management system (CMS) that powers millions of websites worldwide. However, its popularity also makes it a prime target for hackers and cybercriminals. To ensure your WordPress site stays safe and secure, it’s crucial to implement robust security measures. In this blog post, we will discuss five essential tips to fortify your WordPress site and protect it from potential threats.

  1. Keep WordPress Core, Themes, and Plugins Updated:

One of the most critical aspects of securing your WordPress site is ensuring that you consistently update the WordPress core, themes, and plugins. Developers regularly release updates to patch vulnerabilities and improve security. Failure to apply these updates promptly can leave your site susceptible to attacks.

To stay up-to-date:

a) Enable Automatic Updates: WordPress offers an option to enable automatic updates for minor releases, plugins, and themes. While enabling this feature is a proactive step, it’s essential to regularly check for any conflicts or compatibility issues after updates.

b) Monitor Plugin and Theme Developers: Choose reputable plugin and theme developers who actively maintain their products. This ensures you receive timely updates and support when necessary.

  2. Utilize Strong Login Credentials:

The foundation of your WordPress site’s security begins with robust login credentials. Weak passwords are a common entry point for attackers. Follow these best practices:

a) Unique and Complex Passwords: Use a combination of uppercase and lowercase letters, numbers, and special characters. Avoid using easily guessable information like birthdays or common phrases.

b) Limit Login Attempts: Install plugins that limit the number of login attempts. This discourages brute-force attacks and enhances your site’s security.

c) Two-Factor Authentication (2FA): Implement 2FA, which adds an extra layer of security by requiring users to provide a one-time code or use a verification app on their mobile device during login.

  3. Secure Your wp-config.php File:

The wp-config.php file contains sensitive information, including database credentials and security keys. Protecting this file is crucial to prevent unauthorized access.

a) Move wp-config.php: Relocate the wp-config.php file outside the web-accessible directory. This prevents attackers from directly accessing it through the browser.

b) Set File Permissions: Configure appropriate file permissions for wp-config.php (chmod 400). This ensures that only the owner can read and write to the file, preventing unauthorized access.
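The effect of chmod 400 can be demonstrated on a scratch file; on a live site you would substitute the real path of your wp-config.php (commonly under /var/www/html, though that depends on your install):

```shell
# Scratch file standing in for wp-config.php
cfg=$(mktemp)

# Owner read-only: no write, no group/other access
chmod 400 "$cfg"

# Inspect the resulting permission bits
stat -c '%a' "$cfg"   # prints 400
```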

  4. Implement Regular Backups:

Backups are your safety net in case of a security breach or any other catastrophic event. Regularly backing up your WordPress site allows you to restore it to a clean state quickly.

a) Automated Backup Solutions: Utilize reliable backup plugins that offer automated, scheduled backups. This saves you time and ensures your backups are up-to-date.

b) Offsite Storage: Store your backups on a remote server or cloud storage to safeguard them from potential server failures or security breaches on your hosting server.
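The file-level half of such a backup job can be sketched as follows. Temporary directories stand in for your WordPress root and backup target; a real job would also dump the database and typically run on a schedule (e.g. from cron):

```shell
# Stand-ins for the WordPress root (e.g. /var/www/html) and backup target
src=$(mktemp -d)
dest=$(mktemp -d)
echo "<?php // demo ?>" > "$src/index.php"

# Archive the site files into a dated, compressed tarball
tar -czf "$dest/wp-files-$(date +%F).tar.gz" -C "$src" .

# List the resulting backup artifact
ls "$dest"
```

From here, the dated tarball would be copied to offsite or cloud storage, as recommended above.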

  5. Install Security Plugins:

Security plugins provide an extra layer of protection and help you monitor and mitigate potential threats.

a) Firewall Protection: Use a Web Application Firewall (WAF) to filter incoming traffic and block malicious requests before they reach your site.

b) Malware Scanning: Employ security plugins that regularly scan your site for malware, suspicious files, and vulnerabilities.

c) Login Lockdown: As mentioned earlier, use plugins that enforce login lockdowns to prevent brute-force attacks.

Conclusion:

Securing your WordPress site is an ongoing process that requires consistent attention and effort. By following these five essential tips – keeping your WordPress core, themes, and plugins updated, using strong login credentials, securing the wp-config.php file, implementing regular backups, and installing security plugins – you significantly improve the resilience of your site against potential threats. Remember that staying informed about the latest security practices and emerging threats is equally important in maintaining a secure WordPress website.

If you find yourself overwhelmed with the complexities of WordPress security or wish to enhance your site’s protection with expert assistance, Aegis DevOps can secure your WordPress site. As a specialized DevOps team, we have extensive experience in securing WordPress sites from various threats. Our services encompass comprehensive security audits, where we identify vulnerabilities and implement tailored solutions to fortify your website. Additionally, we can manage regular updates, backups, and monitoring to ensure your WordPress site remains safe and up-to-date. With Aegis DevOps as your partner, you can focus on your core business while entrusting the security of your WordPress site to seasoned professionals. Contact us today to strengthen your WordPress site’s security and safeguard it against potential risks.

Install Jenkins on Ubuntu

Jenkins is the leading open source automation server, with hundreds of plugins to support building, deploying, and automating any project. It helps automate the parts of software development related to building, testing, and deploying, facilitating CI/CD (Continuous Integration / Continuous Delivery). Jenkins builds and tests your software projects continuously, making it easier for developers to integrate changes to the project and for users to obtain a fresh build. It also lets you continuously deliver your software by integrating with a large number of testing and deployment technologies.

Jenkins achieves Continuous Integration with the help of plugins that allow the integration of various DevOps stages. Let’s say that you want to integrate a particular tool: all you need to do is install the plugins for that tool, for example Git, Maven 2 projects, Amazon EC2, or HTML Publisher. There are over 1,000 plugins for Jenkins, and it has over 147,000 active installations and over 1,000,000 users.

Installing Jenkins – system requirements

Minimum hardware requirements:

256 MB of RAM

1 GB of drive space (although 10 GB is a recommended minimum if running Jenkins as a Docker container)

Now – since we are not living in 2002 anymore, we can safely post the suggested minimum requirements:

4 GB+ of RAM

50 GB+ of drive space

This is the recommended configuration for a small team. You can of course adjust it to your needs, but we do suggest a dedicated server or dedicated VPS for running Jenkins. For more information on the hardware requirements to scale Jenkins, see the Jenkins hardware recommendations page.

Installing Jenkins – software requirements

Software requirements:

Java: see the Java Requirements page

Web browser: see the Web Browser Compatibility page

For Windows operating system: Windows Support Policy

For Linux operating system: Linux Support Policy

For servlet containers: Servlet Container Support Policy

Installation

Java

To start our Jenkins installation, we first need to check which version of Java is installed on our server. Since Jenkins relies heavily on Java, we need to get this right from the start before continuing. Many Linux distributions (including Ubuntu) do not include Java by default, and some versions are not compatible with Jenkins. To determine which version of Java, if any, is installed, we will use the java --version command:

As you can see, we do not have any version of Java installed, so we will proceed with installing openjdk-11-jre on this server.

$ sudo apt update
$ sudo apt install openjdk-11-jre

Jenkins

On Ubuntu, we can install Jenkins through apt. We can install either the LTS release or Weekly release. The LTS release is chosen every 12 weeks from the stream of regular releases as the stable release for that time period.

$ curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc > /dev/null
$ echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
$ sudo apt update
$ sudo apt install jenkins

The package installation will:

Set up Jenkins as a daemon launched on start. Run systemctl cat jenkins for more details.

Create a ‘jenkins’ user to run this service.

Direct console log output to systemd-journald. Run journalctl -u jenkins.service if you are troubleshooting Jenkins.

Populate /lib/systemd/system/jenkins.service with configuration parameters for the launch, e.g. JENKINS_HOME.

Set Jenkins to listen on port 8080. Access this port with your browser to start configuration.

If Jenkins fails to start because of a port conflict, you can run systemctl edit jenkins and add the following:

[Service]
Environment="JENKINS_PORT=8081"
You can use any port you have open.

Now the final two commands you need to run:

$ sudo service jenkins start
$ sudo systemctl enable jenkins

This is it.
Now you can browse to your Jenkins installation on port 8080.

Activate Jenkins and initial setup

When browsing to your Jenkins install, you will be met with the Unlock Jenkins screen:

Simply run this command to get your initial admin password:

$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Once you have completed this step, you will see the following screen:

For now, you can just install the selected plugins. After this you will be required to create the Admin password for the login. And then you are set.
You can install further plugins as per your needs to enable all the required services you will use.

The post Install Jenkins on Ubuntu appeared first on Sleuth Technical.
