<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Untitled Publication]]></title><description><![CDATA[Untitled Publication]]></description><link>https://blog.automation-dev.us</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 08:41:04 GMT</lastBuildDate><atom:link href="https://blog.automation-dev.us/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Dispatch - Roboshop Project]]></title><description><![CDATA[The code
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": ...]]></description><link>https://blog.automation-dev.us/dispatch-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/dispatch-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Mon, 10 Mar 2025 04:00:07 GMT</pubDate><content:encoded><![CDATA[<p>The code to create the dispatch EC2 instance:</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "dispatch"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "dispatch"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
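<p>A parameter file like this is typically fed to the AWS CLI. As a sketch, assuming it is saved as <code>dispatch.json</code> (the file name is our choice) and that the CLI is configured with valid credentials and a default region, the instance can be launched with:</p>
<pre><code class="lang-plaintext"># launch the instance described by the JSON parameter file (file name assumed)
aws ec2 run-instances --cli-input-json file://dispatch.json
</code></pre>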
<p>Continue...</p>
<p>Success Report</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681751170964/1341c6a3-fcef-4915-a382-430989f5663f.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Payment - Roboshop Project]]></title><description><![CDATA[The code to create the vm
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
 ...]]></description><link>https://blog.automation-dev.us/payment-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/payment-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Sun, 09 Mar 2025 05:00:36 GMT</pubDate><content:encoded><![CDATA[<p>The code to create the vm</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "payment"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "payment"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<p>Continue...</p>
]]></content:encoded></item><item><title><![CDATA[RabbitMQ - Roboshop Project]]></title><description><![CDATA[Code to EC2 Instance
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      ...]]></description><link>https://blog.automation-dev.us/rabbitmq-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/rabbitmq-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Sat, 08 Mar 2025 05:00:37 GMT</pubDate><content:encoded><![CDATA[<p>The code to create the EC2 instance:</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "rabbitMQ"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "rabbitMQ"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<p>Continue</p>
]]></content:encoded></item><item><title><![CDATA[Shipping - Roboshop Project]]></title><description><![CDATA[The following code can create the ec2 instance for shipping.
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      ...]]></description><link>https://blog.automation-dev.us/shipping-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/shipping-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Fri, 07 Mar 2025 05:00:39 GMT</pubDate><content:encoded><![CDATA[<p>The following code creates the EC2 instance for shipping.</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "shipping"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "shipping"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<p>Continue...</p>
]]></content:encoded></item><item><title><![CDATA[MySQL - Roboshop Project]]></title><description><![CDATA[The following can create ec2 spot instance
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIp...]]></description><link>https://blog.automation-dev.us/mysql-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/mysql-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Thu, 06 Mar 2025 05:00:13 GMT</pubDate><content:encoded><![CDATA[<p>The following creates the EC2 spot instance:</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "mysql"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "mysql"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<p>Continue</p>
]]></content:encoded></item><item><title><![CDATA[User - Roboshop Project]]></title><description><![CDATA[The "User" microservice is responsible for handling user logins and registrations in the RobotShop e-commerce portal. It was developed using Node.js and requires a version above 18.
The following code creates the EC2 instance for the user service.
{
  "MaxC...]]></description><link>https://blog.automation-dev.us/user-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/user-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Wed, 05 Mar 2025 05:00:21 GMT</pubDate><content:encoded><![CDATA[<p>The "User" microservice is responsible for handling user logins and registrations in the RoboShop e-commerce portal. It was developed using Node.js and requires a Node.js version above 18.</p>
<p>The following code creates the EC2 instance for the user service:</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "user"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "user"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<p>To set up the Node.js repositories, the vendor provides a script that can be run using the command:</p>
<pre><code class="lang-plaintext">curl -sL https://rpm.nodesource.com/setup_lts.x | bash
</code></pre>
<p>After that, Node.js can be installed using</p>
<pre><code class="lang-plaintext">yum install nodejs -y
</code></pre>
<p>Since the application is not packaged as an RPM, every step must be configured manually.</p>
<p>To follow best practices, the application runs as a non-root user. The application user is created with <code>useradd roboshop</code>; this account is used only to run the application, not to log in to the server. The application is kept in a standard location, and the app directory is created using the command</p>
<pre><code class="lang-plaintext">mkdir /app
</code></pre>
<p>The application code is then downloaded using</p>
<pre><code class="lang-plaintext">curl -L -o /tmp/user.zip https://roboshop-artifacts.s3.amazonaws.com/user.zip
cd /app
unzip /tmp/user.zip
</code></pre>
<p>Like most applications, this one has some common dependencies, which can be installed by running <code>npm install</code> in the app directory. A new service is then set up in systemd using a configuration file created at the following path:</p>
<pre><code class="lang-plaintext">vim /etc/systemd/system/user.service
</code></pre>
<p>Add the following:</p>
<pre><code class="lang-plaintext">[Unit] 
Description = User Service 
[Service] 
User=roboshop
Environment=MONGO=true
Environment=REDIS_HOST=Enter-Redis-IP
Environment=MONGO_URL="mongodb://MONGODB-IP:27017/users"
ExecStart=/bin/node /app/server.js
SyslogIdentifier=user

[Install] 
WantedBy=multi-user.target
</code></pre>
<p>The service can be loaded using</p>
<pre><code class="lang-plaintext">systemctl daemon-reload
</code></pre>
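<p>With the daemon reloaded, the microservice still has to be enabled and started. A sketch, assuming the service name matches the unit file <code>user.service</code> above:</p>
<pre><code class="lang-plaintext">systemctl enable user   # start automatically at boot
systemctl start user    # start now
systemctl status user   # confirm it is active (running)
</code></pre>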
<p>The schema must be loaded into the database to fully enable the application's functionality. This requires the MongoDB client, which is installed by first creating the following repo file:</p>
<pre><code class="lang-plaintext">vim /etc/yum.repos.d/mongo.repo
</code></pre>
<pre><code class="lang-plaintext">[mongodb-org-4.2] 
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.2/x86_64/
gpgcheck=0 
enabled=1
</code></pre>
<p>The MongoDB shell can then be installed with:</p>
<pre><code class="lang-plaintext">yum install mongodb-org-shell -y
</code></pre>
<p>Then, the schema can be loaded using</p>
<pre><code class="lang-plaintext">mongo --host MONGODB-SERVER-IPADDRESS &lt;/app/schema/user.js
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Redis - Roboshop Project]]></title><description><![CDATA[The Code to create EC2 T2.Small server.
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t2.small",
  "EbsOptimized": false,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAd...]]></description><link>https://blog.automation-dev.us/redis-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/redis-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Tue, 04 Mar 2025 05:00:32 GMT</pubDate><content:encoded><![CDATA[<p>The Code to create EC2 T2.Small server.</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t2.small",
  "EbsOptimized": false,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "redis"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "redis"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<h2 id="heading-redis-configuration">Redis Configuration</h2>
<p>This section sets up Redis, an in-memory data store that RoboShop uses for caching, so frequently accessed data can be served without repeated database queries.</p>
<p><mark>To begin, confirm the version of the database software chosen by the developer. The repository configuration is provided as an RPM (the Remi repository), so the first step is to install it using the following command:</mark></p>
<pre><code class="lang-plaintext">yum install https://rpms.remirepo.net/enterprise/remi-release-8.rpm -y
</code></pre>
<p>Next, Redis 6.2 should be enabled from package streams using this command:</p>
<pre><code class="lang-plaintext">dnf module enable redis:remi-6.2 -y
</code></pre>
<p>After that, Redis can be installed with the following command:</p>
<pre><code class="lang-plaintext">yum install redis -y
</code></pre>
<p>By default, Redis only opens the port to the <a target="_blank" href="http://localhost">localhost</a> (127.0.0.1), which means that this service can only be accessed by the application that is hosted on this server. However, if we need to access this service from another server, we must modify the configuration accordingly. Specifically, we need to update the listen address from 127.0.0.1 to 0.0.0.0 in both /etc/redis.conf and /etc/redis/redis.conf.</p>
<p>Once the Redis configuration has been updated, we can start and enable the Redis service with these commands:</p>
<pre><code class="lang-plaintext">systemctl enable redis 
systemctl start redis
</code></pre>
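<p>Once the service is up, reachability from another machine can be checked with the Redis CLI (replace the placeholder with the Redis server's IP):</p>
<pre><code class="lang-plaintext">redis-cli -h &lt;REDIS-SERVER-IP&gt; ping   # a healthy server replies: PONG
</code></pre>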
]]></content:encoded></item><item><title><![CDATA[MongoDB - Roboshop Project]]></title><description><![CDATA[We now set up MongoDB.
Instance Code
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress"...]]></description><link>https://blog.automation-dev.us/mongodb-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/mongodb-roboshop-project</guid><category><![CDATA[Multi-Layer-Application-Deployment]]></category><category><![CDATA[MongoDB]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Roboshop-Project]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Mon, 03 Mar 2025 05:00:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/cijiWIwsMB8/upload/7f3963267081be281be2e141251588c5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We now set up MongoDB.</p>
<p>Instance Code</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "mongodb"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "mongodb"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<h2 id="heading-mongodb-installation-and-configuration"><strong>MongoDB Installation and Configuration</strong></h2>
<p>In RoboShop, the database management system MongoDB has been chosen to store and manage data. Here's how to install and configure it:</p>
<h3 id="heading-step-1-set-up-mongodb-repository"><strong>Step 1: Set up MongoDB repository</strong></h3>
<p>Before installing MongoDB, we need to set up its repository. Create a new file called <code>mongo.repo</code> in the directory <code>/etc/yum.repos.d/</code> using the following command:</p>
<pre><code class="lang-plaintext">sudo vim /etc/yum.repos.d/mongo.repo
</code></pre>
<p>Then, add the following lines to the file:</p>
<pre><code class="lang-plaintext">[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.2/x86_64/
gpgcheck=0
enabled=1
</code></pre>
<h3 id="heading-step-2-install-mongodb"><strong>Step 2: Install MongoDB</strong></h3>
<p>Once the repository is set up, we can install MongoDB using the following command:</p>
<pre><code class="lang-plaintext">sudo yum install mongodb-org -y
</code></pre>
<h3 id="heading-step-3-start-and-enable-mongodb-service"><strong>Step 3: Start and enable MongoDB service</strong></h3>
<p>Start and enable the MongoDB service using the following commands:</p>
<pre><code class="lang-plaintext">sudo systemctl enable mongod
sudo systemctl start mongod
</code></pre>
<h3 id="heading-step-4-update-mongodb-configuration"><strong>Step 4: Update MongoDB configuration</strong></h3>
<p>By default, MongoDB only listens to connections from the local machine (<a target="_blank" href="http://localhost">localhost</a>). In order to allow connections from other servers, we need to update the MongoDB configuration file.</p>
<p>Open the configuration file <code>/etc/mongod.conf</code> in your text editor:</p>
<pre><code class="lang-plaintext">sudo vim /etc/mongod.conf
</code></pre>
<p>Find the line that starts with <code>bindIp</code> and change the value from <code>127.0.0.1</code> to <code>0.0.0.0</code>.</p>
<pre><code class="lang-plaintext"># network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Listen on all interfaces so other servers can connect.
</code></pre>
<h3 id="heading-step-5-restart-mongodb-service"><strong>Step 5: Restart MongoDB service</strong></h3>
<p>After updating the MongoDB configuration file, restart the MongoDB service to apply the changes:</p>
<pre><code class="lang-plaintext">sudo systemctl restart mongod
</code></pre>
<p>That's it! MongoDB is now installed and configured to accept connections from other servers.</p>
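<p>Remote access can be confirmed from another server using the Mongo shell (replace the placeholder with the MongoDB server's IP; <code>mongo</code> is the 4.x shell):</p>
<pre><code class="lang-plaintext">mongo --host &lt;MONGODB-SERVER-IPADDRESS&gt; --eval 'db.runCommand({ ping: 1 })'
# a response containing "ok" : 1 means the server accepted the connection
</code></pre>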
]]></content:encoded></item><item><title><![CDATA[Cart - Roboshop Project]]></title><description><![CDATA[The code to create the EC2 Instance.
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddres...]]></description><link>https://blog.automation-dev.us/cart-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/cart-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Sun, 02 Mar 2025 05:00:12 GMT</pubDate><content:encoded><![CDATA[<p>The code to create the EC2 Instance.</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "cart"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "cart"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<p>Continue</p>
]]></content:encoded></item><item><title><![CDATA[User/Group Management and Permissions/Ownership in Linux]]></title><description><![CDATA[Linux, as a multi-user operating system, requires effective user and group management to maintain system security and control access to resources. This document outlines key concepts and practices in user/group management and file permissions/ownersh...]]></description><link>https://blog.automation-dev.us/usergroup-management-and-permissionsownership-in-linux</link><guid isPermaLink="true">https://blog.automation-dev.us/usergroup-management-and-permissionsownership-in-linux</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Sat, 01 Mar 2025 05:00:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4Mw7nkQDByk/upload/83a7d5efdf6cf2d850cab82ee6b432f1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Linux, as a multi-user operating system, requires effective user and group management to maintain system security and control access to resources. This document outlines key concepts and practices in user/group management and file permissions/ownership.</p>
<h2 id="heading-user-management"><strong>User Management</strong></h2>
<h2 id="heading-types-of-users"><strong>Types of Users</strong></h2>
<ol>
<li><p><strong>Root User</strong>: Superuser with unrestricted system access. Home directory: <code>/root</code>.</p>
</li>
<li><p><strong>Regular Users</strong>: Non-privileged users with home directories in <code>/home/username</code>.</p>
</li>
</ol>
<h2 id="heading-creating-users"><strong>Creating Users</strong></h2>
<pre><code class="lang-bash">sudo useradd john
sudo passwd john
</code></pre>
<h2 id="heading-user-configuration-files"><strong>User Configuration Files</strong></h2>
<ul>
<li><p><code>/etc/passwd</code>: User account information</p>
</li>
<li><p><code>/etc/shadow</code>: Encrypted passwords and account expiration data</p>
</li>
<li><p><code>/etc/group</code>: Group information</p>
</li>
</ul>
<h2 id="heading-group-management"><strong>Group Management</strong></h2>
<p>Groups facilitate managing permissions for multiple users simultaneously.</p>
<h2 id="heading-creating-groups-and-adding-users"><strong>Creating Groups and Adding Users</strong></h2>
<pre><code class="lang-bash">sudo groupadd developers
sudo usermod -aG developers john
</code></pre>
<h2 id="heading-key-management-commands"><strong>Key Management Commands</strong></h2>
<ul>
<li><p><code>useradd</code>: Add new user</p>
</li>
<li><p><code>passwd</code>: Set/change user password</p>
</li>
<li><p><code>usermod</code>: Modify user account</p>
</li>
<li><p><code>groupadd</code>: Add new group</p>
</li>
<li><p><code>groups</code>: Display user's group memberships</p>
</li>
<li><p><code>deluser</code>: Delete user</p>
</li>
<li><p><code>delgroup</code>: Delete group</p>
</li>
</ul>
<h2 id="heading-permissions-and-ownership"><strong>Permissions and Ownership</strong></h2>
<p>Linux uses a permission model to control file and directory access.</p>
<h2 id="heading-file-ownership"><strong>File Ownership</strong></h2>
<ul>
<li><p>Owner: User who owns the file</p>
</li>
<li><p>Group: Group that owns the file</p>
</li>
</ul>
<p>Change ownership:</p>
<pre><code class="lang-bash">sudo chown user:group filename
</code></pre>
<h2 id="heading-file-permissions"><strong>File Permissions</strong></h2>
<p>Permissions are represented as:</p>
<pre><code class="lang-bash">-rwxr-xr--
</code></pre>
<ul>
<li><p>First character: File type</p>
</li>
<li><p>Next three: Owner permissions</p>
</li>
<li><p>Following three: Group permissions</p>
</li>
<li><p>Last three: Others' permissions</p>
</li>
</ul>
<p>Permissions include:</p>
<ul>
<li><p><code>r</code>: Read</p>
</li>
<li><p><code>w</code>: Write</p>
</li>
<li><p><code>x</code>: Execute</p>
</li>
</ul>
<h2 id="heading-changing-file-permissions"><strong>Changing File Permissions</strong></h2>
<p>Using <code>chmod</code> command:</p>
<p>Symbolic mode:</p>
<pre><code class="lang-bash">chmod u+rwx filename
chmod g-w filename
chmod o=rx filename
</code></pre>
<p>Numeric mode:</p>
<pre><code class="lang-bash">chmod 755 filename
chmod 644 filename
</code></pre>
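<p>Each octal digit is the sum of r=4, w=2, and x=1 for owner, group, and others respectively. A self-contained check on a scratch file (GNU <code>stat</code> assumed):</p>
<pre><code class="lang-bash">f=$(mktemp)              # scratch file to experiment on
chmod 754 "$f"           # owner rwx (4+2+1), group r-x (4+1), others r-- (4)
stat -c '%a' "$f"        # prints: 754
chmod 644 "$f"           # owner rw-, group r--, others r--
stat -c '%a' "$f"        # prints: 644
rm -f "$f"
</code></pre>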
<h2 id="heading-practical-scenarios"><strong>Practical Scenarios</strong></h2>
<ol>
<li><p><strong>Creating a New User and Assigning to a Group</strong></p>
<pre><code class="lang-bash"> sudo useradd john
 sudo passwd john
 sudo groupadd developers
 sudo usermod -aG developers john
 groups john
</code></pre>
</li>
<li><p><strong>Changing File Ownership and Permissions</strong></p>
<pre><code class="lang-bash"> touch example.txt
 sudo chown john:developers example.txt
 chmod 754 example.txt
 ls -l example.txt
</code></pre>
</li>
</ol>
<h2 id="heading-comprehensive-example"><strong>Comprehensive Example</strong></h2>
<pre><code class="lang-bash">sudo useradd alice
sudo passwd alice
sudo groupadd engineers
sudo usermod -aG engineers alice
groups alice
touch project.txt
sudo chown alice:engineers project.txt
chmod 664 project.txt
ls -l project.txt
</code></pre>
<p>This comprehensive guide provides a solid foundation for managing users, groups, and file permissions in Linux environments, essential for maintaining system security and access control.</p>
]]></content:encoded></item><item><title><![CDATA[Catalogue - Roboshop Project]]></title><description><![CDATA[The catalogue code
{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "Su...]]></description><link>https://blog.automation-dev.us/catalogue-roboshop-project</link><guid isPermaLink="true">https://blog.automation-dev.us/catalogue-roboshop-project</guid><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Sat, 01 Mar 2025 05:00:34 GMT</pubDate><content:encoded><![CDATA[<p>The catalogue code</p>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "catalogue"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "catalogue"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<h2 id="heading-configuring-catalogue">Configuring Catalogue</h2>
<p>The Catalogue microservice is responsible for serving the list of items displayed in the RoboShop application.</p>
<p>As per the developer's preference, Node.js has been selected as the development language, with Node.js version greater than 18. To set up the Node.js environment, we need to first set up the Node.js repository by running the following command:</p>
<pre><code class="lang-bash">curl -sL https://rpm.nodesource.com/setup_lts.x | bash
</code></pre>
<p>After setting up the repository, we can proceed with the installation of Node.js by running the command:</p>
<pre><code class="lang-bash">yum install nodejs -y
</code></pre>
<p>Next, we need to configure the application. Since the application does not have an RPM package, we will have to configure it manually.</p>
<p>As per standard practice, applications should run as non-root users. Therefore, we need to create a user for the Catalogue service by running the command:</p>
<pre><code class="lang-bash">useradd roboshop
</code></pre>
<p>We also need to create a standard application directory by running the command:</p>
<pre><code class="lang-bash">mkdir /app
</code></pre>
<p>We can download the application code to the newly created directory by running the command:</p>
<pre><code class="lang-bash">curl -o /tmp/catalogue.zip https://roboshop-artifacts.s3.amazonaws.com/catalogue.zip
<span class="hljs-built_in">cd</span> /app
unzip /tmp/catalogue.zip
</code></pre>
<p>After downloading the application code, we need to download the application's dependencies by running the command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /app
npm install
</code></pre>
<p>Next, we need to set up a new service in SystemD so that systemctl can manage this service. We can do this by creating a service file at the following path:</p>
<pre><code class="lang-bash">/etc/systemd/system/catalogue.service
</code></pre>
<p>We can use the following template for the Catalogue service:</p>
<pre><code class="lang-ini">[Unit]
Description=Catalogue Service

[Service]
User=roboshop
Environment=MONGO=true
Environment=MONGO_URL=<span class="hljs-string">"mongodb://&lt;MONGODB-SERVER-IPADDRESS&gt;:27017/catalogue"</span>
ExecStart=/bin/node /app/server.js
SyslogIdentifier=catalogue

[Install]
WantedBy=multi-user.target
</code></pre>
<p>We must ensure that we replace the <code>&lt;MONGODB-SERVER-IPADDRESS&gt;</code> placeholder with the IP address of the MongoDB server. After creating the service file, we must reload the SystemD daemon to detect the new service by running the command:</p>
<pre><code class="lang-plaintext">systemctl daemon-reload
</code></pre>
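<p>Editing the placeholder by hand is error-prone, so the substitution can also be scripted with <code>sed</code>. The sketch below is illustrative only: the IP address is an assumption, and it runs against a scratch copy rather than the real unit file. On the server, point <code>sed</code> at <code>/etc/systemd/system/catalogue.service</code> and use your MongoDB server's actual private IP:</p>

```shell
# Illustrative only: MONGODB_IP and the scratch file are assumptions.
MONGODB_IP="172.31.10.20"
unit=$(mktemp)
echo 'Environment=MONGO_URL="mongodb://<MONGODB-SERVER-IPADDRESS>:27017/catalogue"' > "$unit"

# Replace every occurrence of the placeholder in the unit file.
sed -i "s|<MONGODB-SERVER-IPADDRESS>|${MONGODB_IP}|g" "$unit"
cat "$unit"
```

<p>Scripting the substitution this way also makes the step repeatable if the MongoDB server's IP ever changes.</p>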
<p>Finally, we can start the Catalogue service by running the following commands:</p>
<pre><code class="lang-plaintext">systemctl enable catalogue
systemctl start catalogue
</code></pre>
<p>To load the schema for the application, we must first install the MongoDB client by setting up the MongoDB repository and running the following command:</p>
<pre><code class="lang-plaintext">yum install mongodb-org-shell -y
</code></pre>
<p>After installing the MongoDB client, we can load the schema by running the following command:</p>
<pre><code class="lang-bash">mongo --host &lt;MONGODB-SERVER-IPADDRESS&gt; &lt;/app/schema/catalogue.js
</code></pre>
<p>Finally, we must update the frontend configuration by updating the IP address of the Catalogue server in the <code>/etc/nginx/default.d/roboshop.conf</code> configuration file.</p>
]]></content:encoded></item><item><title><![CDATA[10 Corporate Real-Time Shell Scripts]]></title><description><![CDATA[Backup Script Script
SOURCE="/home/ubuntu/Test01"
DESTINATION="/home/ubuntu/Test02/"
DATE=$(date +%Y-%m-%d_%H-%M-%S)

# Create backup directory and copy files

mkdir -p $DESTINATION/$DATE
cp -r $SOURCE $DESTINATION/$DATE
echo "Backup completed on $DA...]]></description><link>https://blog.automation-dev.us/10-corporate-real-time-shell-scripts</link><guid isPermaLink="true">https://blog.automation-dev.us/10-corporate-real-time-shell-scripts</guid><category><![CDATA[ #BackupScript]]></category><category><![CDATA[DevOps tools]]></category><category><![CDATA[shell scripting]]></category><category><![CDATA[Data Backup]]></category><category><![CDATA[cronjob]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Fri, 28 Feb 2025 17:18:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/PhYq704ffdA/upload/0a519803de08e649ba3f92a228779cda.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-backup-script-script">Backup Script</h3>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>
SOURCE=<span class="hljs-string">"/home/ubuntu/Test01"</span>
DESTINATION=<span class="hljs-string">"/home/ubuntu/Test02/"</span>
DATE=$(date +%Y-%m-%d_%H-%M-%S)

<span class="hljs-comment"># Create backup directory and copy files</span>

mkdir -p <span class="hljs-variable">$DESTINATION</span>/<span class="hljs-variable">$DATE</span>
cp -r <span class="hljs-variable">$SOURCE</span> <span class="hljs-variable">$DESTINATION</span>/<span class="hljs-variable">$DATE</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Backup completed on <span class="hljs-variable">$DATE</span>"</span>
</code></pre>
<p><strong>Explanation:</strong></p>
<p>• <strong>SOURCE</strong>: The directory to be backed up.</p>
<p>• <strong>DESTINATION</strong>: The directory where the backup will be stored.</p>
<p>• <strong>DATE</strong>: Captures the current date and time to create a unique backup folder.</p>
<p>• <code>mkdir -p $DESTINATION/$DATE</code>: Creates the backup directory if it does not exist.</p>
<p>• <code>cp -r $SOURCE $DESTINATION/$DATE</code>: Copies the contents of the source directory to the backup directory.</p>
<p>• <code>echo "Backup completed on $DATE"</code>: Outputs a message indicating the completion of the backup.</p>
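<p>Unquoted variables break the script as soon as a path contains a space, and there is no error handling. The hardened sketch below follows the same flow with quoting and fail-fast behavior; scratch directories created with <code>mktemp</code> stand in for the article's <code>/home/ubuntu/Test01</code> and <code>/home/ubuntu/Test02</code> paths so it is runnable anywhere:</p>

```shell
#!/bin/bash
# Hardened sketch of the same backup flow: quoted variables and fail-fast.
# Scratch directories stand in for /home/ubuntu/Test01 and /home/ubuntu/Test02.
set -e

SOURCE=$(mktemp -d)
DESTINATION=$(mktemp -d)
echo "sample data" > "$SOURCE/file.txt"

DATE=$(date +%Y-%m-%d_%H-%M-%S)

# Create the dated backup directory and copy the source tree into it.
mkdir -p "$DESTINATION/$DATE"
cp -r "$SOURCE" "$DESTINATION/$DATE"
echo "Backup completed on $DATE"
```

<p>With <code>set -e</code>, a failed <code>cp</code> stops the script before it can print a misleading success message.</p>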
<h3 id="heading-scheduling-the-backup-with-cron">Scheduling the backup with Cron</h3>
<p>To schedule regular execution of the backup script, utilize the crontab editor by running the following command:</p>
<p><code>crontab -e</code></p>
<p>Once in the editor, add the following line to configure the backup schedule:</p>
<pre><code class="lang-bash">* * * * * /path/to/backup_script.sh
</code></pre>
<p>This configuration will execute the backup script every minute (see <a target="_blank" href="https://www.windmill.dev/blog/edit-crontabs">this guide to editing crontabs</a>). Modify the cron schedule parameters to align with your desired backup frequency.</p>
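<p>Running a backup every minute is mainly useful for testing. A more typical schedule, such as a daily run at 02:00, would look like this (the script path is illustrative):</p>

```plaintext
0 2 * * * /path/to/backup_script.sh
```

<p>The five fields are, in order: minute, hour, day of month, month, and day of week.</p>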
<h2 id="heading-disk-usage-monitoring-script"><strong>Disk Usage Monitoring Script</strong></h2>
<h2 id="heading-script-overview"><strong>Script Overview</strong></h2>
<p>This Bash script monitors disk usage across partitions and issues warnings when usage exceeds a predefined threshold.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

THRESHOLD=80

df -H | grep -vE <span class="hljs-string">'^Filesystem|tmpfs|cdrom'</span> | awk <span class="hljs-string">'{ print $5 " " $1 }'</span> | <span class="hljs-keyword">while</span> <span class="hljs-built_in">read</span> output;
<span class="hljs-keyword">do</span>
    usage=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$output</span> | awk <span class="hljs-string">'{ print $1}'</span> | cut -d<span class="hljs-string">'%'</span> -f1)
    partition=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$output</span> | awk <span class="hljs-string">'{ print $2 }'</span>)
    <span class="hljs-keyword">if</span> [ <span class="hljs-variable">$usage</span> -ge <span class="hljs-variable">$THRESHOLD</span> ]; <span class="hljs-keyword">then</span>
        <span class="hljs-built_in">echo</span> <span class="hljs-string">"Warning: Disk usage on <span class="hljs-variable">$partition</span> is at <span class="hljs-variable">${usage}</span>%"</span>
    <span class="hljs-keyword">fi</span>
<span class="hljs-keyword">done</span>
</code></pre>
<h2 id="heading-functionality-breakdown"><strong>Functionality Breakdown</strong></h2>
<ol>
<li><p><strong>Threshold Setting</strong>: The script initializes with a disk usage threshold of 80%.</p>
</li>
<li><p><strong>Disk Usage Data Collection</strong>: Utilizes <code>df -H</code> to retrieve disk usage information in a human-readable format.</p>
</li>
<li><p><strong>Data Filtering</strong>: Employs <code>grep</code> to exclude non-essential filesystem entries.</p>
</li>
<li><p><strong>Data Extraction</strong>: Uses <code>awk</code> to isolate usage percentages and partition names.</p>
</li>
<li><p><strong>Iterative Processing</strong>: Processes each filtered entry using a while loop.</p>
</li>
<li><p><strong>Usage Calculation</strong>: Extracts the numerical usage percentage from each entry.</p>
</li>
<li><p><strong>Partition Identification</strong>: Isolates the partition name for each entry.</p>
</li>
<li><p><strong>Threshold Comparison</strong>: Compares the usage against the predefined threshold.</p>
</li>
<li><p><strong>Alert Generation</strong>: Outputs a warning message for partitions exceeding the threshold.</p>
</li>
</ol>
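<p>The parsing pipeline can be exercised against a canned <code>df</code>-style report, which makes the threshold logic easy to verify without touching real disks. The device names and percentages below are fabricated for the demo:</p>

```shell
#!/bin/bash
# Run the same grep/awk/cut/threshold logic over fabricated df output.
THRESHOLD=80

sample='Filesystem Size Used Avail Use% Mounted
/dev/sda1 100G 90G 10G 90% /
/dev/sdb1 100G 10G 90G 10% /data'

out=$(echo "$sample" | grep -vE '^Filesystem|tmpfs|cdrom' \
  | awk '{ print $5 " " $1 }' | while read output; do
    usage=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
    partition=$(echo "$output" | awk '{ print $2 }')
    if [ "$usage" -ge "$THRESHOLD" ]; then
        echo "Warning: Disk usage on $partition is at ${usage}%"
    fi
done)
echo "$out"
```

<p>Only <code>/dev/sda1</code> (at 90%) trips the 80% threshold; <code>/dev/sdb1</code> (at 10%) is silently skipped.</p>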
<h2 id="heading-service-health-check"><strong>Service Health Check</strong></h2>
<p>This script checks if a specified service is running and starts it if not.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

SERVICE=<span class="hljs-string">"nginx"</span>

<span class="hljs-keyword">if</span> systemctl is-active --quiet <span class="hljs-variable">$SERVICE</span>; <span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$SERVICE</span> is running"</span>
<span class="hljs-keyword">else</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$SERVICE</span> is not running"</span>
    systemctl start <span class="hljs-variable">$SERVICE</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>SERVICE</code>: Specifies the name of the service to check (nginx in this example).</p>
</li>
<li><p><code>systemctl is-active --quiet $SERVICE</code>: Checks if the service is running.</p>
</li>
<li><p>If the service is running, it prints a confirmation message.</p>
</li>
<li><p>If it is not running, it prints a message and attempts to start the service.</p>
</li>
</ul>
<h2 id="heading-network-connectivity-check"><strong>Network Connectivity Check</strong></h2>
<p>This script checks network connectivity to a specified host.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

HOST=<span class="hljs-string">"google.com"</span>
OUTPUT_FILE=<span class="hljs-string">"/home/ubuntu/output.txt"</span>

<span class="hljs-keyword">if</span> ping -c 1 <span class="hljs-variable">$HOST</span> &amp;&gt; /dev/null
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$HOST</span> is reachable"</span> &gt;&gt; <span class="hljs-variable">$OUTPUT_FILE</span>
<span class="hljs-keyword">else</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$HOST</span> is not reachable"</span> &gt;&gt; <span class="hljs-variable">$OUTPUT_FILE</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>HOST</code>: Specifies the hostname to check.</p>
</li>
<li><p><code>OUTPUT_FILE</code>: Defines where to write the output.</p>
</li>
<li><p><code>ping -c 1 $HOST &amp;&gt; /dev/null</code>: Pings the host once, suppressing output.</p>
</li>
<li><p>Depending on the ping result, it writes a reachability status to the output file.</p>
</li>
</ul>
<h2 id="heading-database-backup"><strong>Database Backup</strong></h2>
<p>This script creates a backup of a specified MySQL database.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

DB_NAME=<span class="hljs-string">"mydatabase"</span>
BACKUP_DIR=<span class="hljs-string">"/path/to/backup"</span>
DATE=$(date +%Y-%m-%d_%H-%M-%S)

mysqldump -u root -p <span class="hljs-variable">$DB_NAME</span> &gt; <span class="hljs-variable">$BACKUP_DIR</span>/<span class="hljs-variable">$DB_NAME</span>-<span class="hljs-variable">$DATE</span>.sql

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Database backup completed: <span class="hljs-variable">$BACKUP_DIR</span>/<span class="hljs-variable">$DB_NAME</span>-<span class="hljs-variable">$DATE</span>.sql"</span>
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>DB_NAME</code>: Specifies the database to back up.</p>
</li>
<li><p><code>BACKUP_DIR</code>: Defines where to store the backup.</p>
</li>
<li><p><code>DATE</code>: Captures the current date and time for a unique filename.</p>
</li>
<li><p><code>mysqldump</code> command creates a SQL dump of the database.</p>
</li>
<li><p>The echo statement confirms the backup completion and location.</p>
</li>
</ul>
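<p>As written, <code>mysqldump -u root -p</code> stops to prompt for a password, which breaks unattended (cron) runs, and passing the password on the command line would expose it in the process list. A common remedy is a <code>~/.my.cnf</code> options file readable only by the backup user (<code>chmod 600</code>). The contents below are a sketch with a placeholder password:</p>

```plaintext
[client]
user=root
password=YOUR_PASSWORD_HERE
```

<p>With this file in place, the <code>-p</code> flag can be dropped from the <code>mysqldump</code> invocation.</p>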
<h2 id="heading-system-uptime-check"><strong>System Uptime Check</strong></h2>
<p>This simple script displays the system's uptime.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

uptime -p
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><code>uptime -p</code>: Prints the system uptime in a human-readable format.</li>
</ul>
<h2 id="heading-listening-ports-monitor"><strong>Listening Ports Monitor</strong></h2>
<p>This script lists all listening ports and their associated services.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

netstat -tuln | grep LISTEN
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>netstat -tuln</code>: Lists all TCP and UDP listening ports.</p>
</li>
<li><p><code>grep LISTEN</code>: Filters the output to show only listening ports.</p>
</li>
<li><p>Note: <code>netstat</code> belongs to the deprecated net-tools package; on modern distributions, <code>ss -tuln</code> returns the same information.</p>
</li>
</ul>
<h2 id="heading-automatic-package-updates"><strong>Automatic Package Updates</strong></h2>
<p>This script updates and cleans up system packages.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

apt-get update &amp;&amp; apt-get upgrade -y &amp;&amp; apt-get autoremove -y &amp;&amp; apt-get clean
<span class="hljs-built_in">echo</span> <span class="hljs-string">"System packages updated and cleaned up"</span>
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>apt-get update</code>: Updates the package list.</p>
</li>
<li><p><code>apt-get upgrade -y</code>: Upgrades all installed packages.</p>
</li>
<li><p><code>apt-get autoremove -y</code>: Removes unnecessary packages.</p>
</li>
<li><p><code>apt-get clean</code>: Cleans up the package cache.</p>
</li>
<li><p>The echo statement confirms the completion of updates and cleanup.</p>
</li>
</ul>
<h2 id="heading-http-response-time-monitor"><strong>HTTP Response Time Monitor</strong></h2>
<p>This script checks HTTP response times for specified URLs.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

URLS=(<span class="hljs-string">"https://www.devopsshack.com/"</span> <span class="hljs-string">"https://www.linkedin.com/"</span>)

<span class="hljs-keyword">for</span> URL <span class="hljs-keyword">in</span> <span class="hljs-string">"<span class="hljs-variable">${URLS[@]}</span>"</span>; <span class="hljs-keyword">do</span>
    RESPONSE_TIME=$(curl -o /dev/null -s -w <span class="hljs-string">'%{time_total}\n'</span> <span class="hljs-variable">$URL</span>)
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Response time for <span class="hljs-variable">$URL</span>: <span class="hljs-variable">$RESPONSE_TIME</span> seconds"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>URLS</code>: An array of URLs to check.</p>
</li>
<li><p>The for loop iterates over each URL.</p>
</li>
<li><p><code>curl</code> command fetches each URL and measures the total response time.</p>
</li>
<li><p>The script prints the response time for each URL.</p>
</li>
</ul>
<h2 id="heading-system-process-and-memory-usage-monitor"><strong>System Process and Memory Usage Monitor</strong></h2>
<p>This script displays the top processes by memory usage.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

ps aux --sort=-%mem | head -n 10
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><code>ps aux</code>: Lists all running processes.</p>
</li>
<li><p><code>--sort=-%mem</code>: Sorts processes by memory usage in descending order.</p>
</li>
<li><p><code>head -n 10</code>: Displays only the top 10 processes.</p>
</li>
</ul>
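<p>The same idea works for CPU: changing the sort key gives the top consumers by processor time instead of memory (assuming the common procps <code>ps</code>):</p>

```shell
# Top 10 lines: the ps header plus the nine busiest processes by CPU.
# --sort=-%cpu sorts descending on the %CPU column.
ps aux --sort=-%cpu | head -n 10
```
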
<p>These scripts provide valuable tools for various DevOps tasks, from system monitoring to backup and maintenance operations.</p>
]]></content:encoded></item><item><title><![CDATA[Frontend Roboshop]]></title><description><![CDATA[-> Name -> Choose Spot -> devops-practice - Centos-8-Devops-Practice -> Choose t3.micro -> Choose Proceed without Keypair -> Network Settings -> Choose allow-all security group -> Advanced -> Request Spot Instances > Customize -> Request Type (Persis...]]></description><link>https://blog.automation-dev.us/frontend-roboshop</link><guid isPermaLink="true">https://blog.automation-dev.us/frontend-roboshop</guid><category><![CDATA[Roboshop-Project]]></category><category><![CDATA[#aws projects]]></category><category><![CDATA[frontend]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Fri, 28 Feb 2025 16:27:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/tvHtIGbbjMo/upload/88b713676feef16198612073fd9dbe31.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>-&gt; Name -&gt; Choose Spot -&gt; devops-practice - Centos-8-Devops-Practice -&gt; Choose t3.micro -&gt; Choose Proceed without Keypair -&gt; Network Settings -&gt; Choose allow-all security group -&gt; Advanced -&gt; Request Spot Instances -&gt; Customize -&gt; Request Type (Persistent) -&gt; Interruption Behaviour (stop)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681437094300/d3d2f785-0a40-455f-ab4d-a1ddd15770a1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681437173652/ba9203c2-c302-4ea3-9a8b-b64dcb354564.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681437216655/490646b0-5ba6-4dcb-a19a-1667209bdfa6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681437291352/258bfae9-036d-4e15-b5a1-419b768a02c9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681437316655/ebdbb90a-6ae8-4100-84e9-bf4bcea9b3d3.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681437364059/5328de19-4782-479f-b70e-4f410d2e591c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-instance-setup-code">Instance Setup Code</h3>
<pre><code class="lang-plaintext">{
  "MaxCount": 1,
  "MinCount": 1,
  "ImageId": "ami-0089b8e98cd95257d",
  "InstanceType": "t3.micro",
  "EbsOptimized": true,
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "AssociatePublicIpAddress": true,
      "SubnetId": "subnet-086a045ade7eae99f",
      "Groups": [
        "sg-0c2e3fdd288a74d3a"
      ]
    }
  ],
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "frontend"
        }
      ]
    },
    {
      "ResourceType": "spot-instances-request",
      "Tags": [
        {
          "Key": "Name",
          "Value": "frontend"
        }
      ]
    }
  ],
  "InstanceMarketOptions": {
    "MarketType": "spot",
    "SpotOptions": {
      "InstanceInterruptionBehavior": "stop",
      "SpotInstanceType": "persistent"
    }
  },
  "PrivateDnsNameOptions": {
    "HostnameType": "ip-name",
    "EnableResourceNameDnsARecord": true,
    "EnableResourceNameDnsAAAARecord": false
  }
}
</code></pre>
<h2 id="heading-configuring-server">Configuring Server</h2>
<p>The frontend service in RoboShop is responsible for serving the web content over Nginx. This service includes the web frame for the web application and serves static content. To accomplish this, a web server is needed. In this case, the developer has chosen Nginx Web Server, which we will install by running the following command:</p>
<pre><code class="lang-plaintext">yum install nginx -y
</code></pre>
<p>After installation, we need to start and enable the Nginx service:</p>
<pre><code class="lang-plaintext">systemctl enable nginx 
systemctl start nginx
</code></pre>
<p>Once the service is up and running, we can access it through a browser to ensure that default content is being served. To remove the default content, run:</p>
<pre><code class="lang-plaintext">rm -rf /usr/share/nginx/html/*
</code></pre>
<p>Next, we need to download the frontend content by running:</p>
<pre><code class="lang-plaintext">curl -o /tmp/frontend.zip https://roboshop-artifacts.s3.amazonaws.com/frontend.zip
</code></pre>
<p>The content can then be extracted using the following command:</p>
<pre><code class="lang-plaintext">cd /usr/share/nginx/html 
unzip /tmp/frontend.zip
</code></pre>
<p>After the frontend content has been extracted, we need to create an Nginx Reverse Proxy Configuration file. Open the file using the following command:</p>
<pre><code class="lang-plaintext">vim /etc/nginx/default.d/roboshop.conf
</code></pre>
<p>Add the following content to the file:</p>
<pre><code class="lang-plaintext">proxy_http_version 1.1;
location /images/ {
  expires 5s;
  root   /usr/share/nginx/html;
  try_files $uri /images/placeholder.jpg;
}
location /api/catalogue/ { proxy_pass http://localhost:8080/; }
location /api/user/ { proxy_pass http://localhost:8080/; }
location /api/cart/ { proxy_pass http://localhost:8080/; }
location /api/shipping/ { proxy_pass http://localhost:8080/; }
location /api/payment/ { proxy_pass http://localhost:8080/; }

location /health {
  stub_status on;
  access_log off;
}
</code></pre>
<p>Note that <code>localhost</code> should be replaced with the actual IP address of each component server; otherwise, requests proxied by the Nginx server will fail.</p>
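<p>For example, if the catalogue component were listening on a hypothetical private address of <code>172.31.10.21</code>, its line would become:</p>

```plaintext
location /api/catalogue/ { proxy_pass http://172.31.10.21:8080/; }
```

<p>The trailing slash on <code>proxy_pass</code> strips the <code>/api/catalogue/</code> prefix before the request is forwarded.</p>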
<p>Finally, restart the Nginx service to load the configuration changes (running <code>nginx -t</code> first to validate the configuration is good practice):</p>
<pre><code class="lang-plaintext">systemctl restart nginx
</code></pre>
<p>The frontend of RoboShop is now set up.</p>
]]></content:encoded></item><item><title><![CDATA[DevOps Project 1 | GitHub+Jenkins+Docker+ECR+Kubernetes]]></title><description><![CDATA[Project Summary
Prerequisite:

GitHub Account

EC2 Instance

Jenkins Server

Docker

Kubernetes

ECR


GitHub is a web-based platform that hosts Git repositories and offers additional features such as issue tracking, pull requests, and code reviews. ...]]></description><link>https://blog.automation-dev.us/devops-project-1-githubjenkinsdockerecrkubernetes</link><guid isPermaLink="true">https://blog.automation-dev.us/devops-project-1-githubjenkinsdockerecrkubernetes</guid><category><![CDATA[ecommerce]]></category><category><![CDATA[shopify]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Fri, 28 Feb 2025 16:16:33 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-project-summary">Project Summary</h2>
<h2 id="heading-prerequisite">Prerequisite:</h2>
<ul>
<li><h3 id="heading-github-account">GitHub Account</h3>
</li>
<li><h3 id="heading-ec2-instance">EC2 Instance</h3>
</li>
<li><h3 id="heading-jenkins-server">Jenkins Server</h3>
</li>
<li><h3 id="heading-docker">Docker</h3>
</li>
<li><h3 id="heading-kubernetes">Kubernetes</h3>
</li>
<li><h3 id="heading-ecr">ECR</h3>
</li>
</ul>
<p><strong>GitHub</strong> is a web-based platform that hosts Git repositories and offers additional features such as issue tracking, pull requests, and code reviews. It allows developers to share and collaborate on their code with others.</p>
<p>We will use GitHub in this project; the code we will be working with can be found in the repository:</p>
<p>\&gt;..</p>
<p>We created an EC2 instance to install all the necessary packages we need.</p>
<p>Note: <strong><em><mark>We use Ubuntu 20.04 for this project, but you can use any Linux OS.</mark></em></strong></p>
<h3 id="heading-git-installation">Git Installation</h3>
<ul>
<li><p>First, we will install git, to install git we will use the following commands;</p>
<p>  <code>sudo apt update</code></p>
<p>  <code>sudo apt install git</code> (in our case git is already installed in Ubuntu 20.04)</p>
</li>
<li><p>To test the installation of git</p>
<p>  <code>git --version</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680808978226/e0c67a3f-018f-4b23-bff1-e684758a1901.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
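<p>The install-and-verify steps above can be combined into one small sketch (Debian/Ubuntu assumed; the <code>apt</code> branch only runs when git is missing):</p>

```shell
# Install git only if it is not already present, then confirm the version.
command -v git >/dev/null 2>&1 || { sudo apt update && sudo apt install -y git; }
git --version
```
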
<h3 id="heading-jenkins-installation">Jenkins Installation</h3>
<p>To install Jenkins we will refer to the document <a target="_blank" href="https://www.jenkins.io/doc/book/installing/linux/#debianubuntu">Jenkins Installation Instructions.</a></p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=edmHwUTs9OA">https://www.youtube.com/watch?v=edmHwUTs9OA</a></p>
]]></content:encoded></item><item><title><![CDATA[EC2 (Elastic Compute Cloud) Instance Using AWS CLI through Terminal using Linux]]></title><description><![CDATA[In order to create EC2 instance Login to the AWS Console and get the following information;

Install AWS CLI for Linux on the computer.

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

Install zip/ Unzip to unzip th...]]></description><link>https://blog.automation-dev.us/ec2-elastic-compute-cloud-instance-using-aws-cli-through-terminal-using-linux</link><guid isPermaLink="true">https://blog.automation-dev.us/ec2-elastic-compute-cloud-instance-using-aws-cli-through-terminal-using-linux</guid><category><![CDATA[ec2]]></category><category><![CDATA[ec2 instance types]]></category><category><![CDATA[EC2-linux]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Mon, 03 Apr 2023 01:19:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/LqKhnDzSF-8/upload/77b8b2c7e4c71c60520d5cb9b8a50dd6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In order to create EC2 instance Login to the AWS Console and get the following information;</p>
<ul>
<li>Install AWS CLI for Linux on the computer.</li>
</ul>
<p><code>curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"</code></p>
<ul>
<li>Install zip/ Unzip to unzip the aws cli package</li>
</ul>
<p><code>sudo apt-get install zip unzip</code> (this installs both the zip and unzip utilities)</p>
<ul>
<li>Unzip the file, currently should be in the same directory.</li>
</ul>
<p><code>unzip awscliv2.zip</code></p>
<ul>
<li>Navigate to the file and run the command</li>
</ul>
<p><code>sudo ./aws/install</code></p>
<ul>
<li>Create an Admin user through IAM (Identity Access Management).</li>
</ul>
<ul>
<li><p>To get the Access Key ID</p>
</li>
<li><p>Secret Key</p>
</li>
</ul>
<p>(Note: <mark>Google is your best friend; if you don’t know this, search for it.</mark>)</p>
<ul>
<li>Need to know the specific region (for this I will be using us-east-1)</li>
</ul>
<ul>
<li>Create keypair through AWS CLI</li>
</ul>
<p><code>aws ec2 create-key-pair --key-name awsclikey --query 'KeyMaterial' --output text &gt; awsclikey.pem</code></p>
<p><img src="https://lh6.googleusercontent.com/nvGyYxU7-Em28aqNSUst0IVEtlDS028CGrOFkS3sL1YtfeNwzKMAfGPIzsVUW4N6oD_NlVcdMEaLnwGaV5GGM_V2FrB8B-BElxWK1gBfcHaoVccEH-S_15v09g56jpDyKM-nTPo5E-MGb24UGtPTgdvdeEYa1_eI3et_9e14MnY4r1m79QcCrCxqUJQ8cw" alt /></p>
<ul>
<li>Change the permissions of the key so it has read-only access</li>
</ul>
<p><code>chmod 400 awsclikey.pem</code></p>
<ul>
<li>Verify that the key pair was created by describing it (a filter name/value pair can be used to return a more specific list of results):</li>
</ul>
<p><code>aws ec2 describe-key-pairs --key-names awsclikey</code></p>
<ul>
<li>We will need a security group to manage the traffic that will be using the EC2 instances.</li>
</ul>
<p><code>aws ec2 create-security-group --group-name awscligroup --description "group created with awscli" --vpc-id vpc-0e4167f02a110a4ed</code></p>
<p><strong><em>Note: <mark>Get the Default VPC Id from the console</mark></em></strong></p>
<p><img src="https://lh5.googleusercontent.com/Rf8FDmDeUcKmrBWUJqxjkYEl6wz38Cl5x-XTZxMwzQQYeW3YfeW2X_jM5EmmBsv-1xjct0VG_ZjHjj3IDY1DbR1hKtkaCRozoBfY9YOFE1PoFTqJB26U1_mtTovbc-dXvlTZx7K9maHx9_O89epjPTv1G3yJ1JRSWEbujccQQ352M058BqQP17ELTNrafg" alt /></p>
<ul>
<li>Define the inbound rules now</li>
</ul>
<p><code>aws ec2 authorize-security-group-ingress --group-id sg-0b2ba2fe2deb772b6 --protocol tcp  --port 0-65535 --cidr 0.0.0.0/0</code></p>
<p><img src="https://lh3.googleusercontent.com/FKDNq6U9_Qh54gW9IAIFOiC2seZQCSkO_C3wHNPGEm23byjD_9B7CSZuK2eHPlOXQJSYou0xFUDaVdiAlYds6M-v6OuDcmr2ckKeZfMTR7pHsLY0IIdQEnGFxbVDSdf25CxJLGtrUg-DQr_SWE0UPmO_W-bU7WpBow54zpFrBsE_1d_LHO9P4OSaj7ld7A" alt /></p>
<ul>
<li>Ingress refers to inbound (incoming) traffic rules</li>
</ul>
<ul>
<li><p>Group id is what we created just now.</p>
</li>
<li><p>Protocol tcp which we will be using</p>
</li>
<li><p>Port has to define the range 0-65535</p>
</li>
<li><p>Define the IP address 0.0.0.0/0</p>
</li>
<li><p>To perform the next step we will need the Image Id (<em>a specific image we will be using for our instance. For this demo we will be using Ubuntu 20.04 LTS</em>).</p>
</li>
</ul>
<p><code>aws ec2 run-instances --image-id ami-0b93ce03dcbcb10f6 --count 2 --instance-type t2.micro --key-name awsclikey --security-group-ids sg-0b2ba2fe2deb772b6 --subnet-id subnet-013a2aa48b12f80b2</code></p>
<p><img src="https://lh4.googleusercontent.com/HIyzzjhliRiFFisR1Omt52dtRkbDNNPd-D-i8oyEzBlOPmd5sHsGkDt6XYaVdRXcusdNxOGdw1BV6yf6BFbu00NfyMMv84hG3tLkW-W3Qv09pP1-LwWXa1LaY_0VxBcDMeHKyjZyiccQwTr6mT7uJNBgO3bBP9lEoFgd1I2eb94F9Dcp5VogwrGLVL73Bw" alt /></p>
<ul>
<li>The EC2 instance will be ready shortly.</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[AWS IAM (Identity Access Management)]]></title><description><![CDATA[IAM (Identity Access Management) is a single AWS account that lets Root users manage all the users in the environment or the team. IAM helps with authorization and Authentication Access for the team. We can have different permissions and different gr...]]></description><link>https://blog.automation-dev.us/aws-iam-identity-access-management</link><guid isPermaLink="true">https://blog.automation-dev.us/aws-iam-identity-access-management</guid><category><![CDATA[AWS IAM]]></category><category><![CDATA[aws iam policies]]></category><category><![CDATA[IAM]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Mon, 03 Apr 2023 01:11:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/w7ZyuGYNpRQ/upload/e24aeddf9634b954d9fb889c41179ebc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>IAM (Identity Access Management) is a single AWS account that lets Root users manage all the users in the environment or the team. IAM helps with authorization and Authentication Access for the team. We can have different permissions and different groups with permission to manage the users.</p>
<p><em>Note:</em> <strong><em><mark>The user can create an IAM account with the same email as the one used to log in to the AWS Console Account. The user must switch from Root to IAM at the login window.</mark></em></strong></p>
<p><img src="https://lh4.googleusercontent.com/13COXjxDm81Wao6loriZSt86qeBZBCBc4AwI4g_vb6Sizcbc_r9ayu1lBbPKUhmpaGjCYorem--ficRsngARFyuQBGN3BIuIviiztvJS5E-Un0z48Imz14U_KBu3tjo4Pcwq9I1ubjHNJe2NkgK5BWQ3lrtO_bDIXKapzY7sImh24uRhcG8bkCe1TaE5Lg" alt /></p>
<ul>
<li><p>Once Logged in Navigate to the Search bar and Type <strong>IAM</strong> or <strong>Identity Access Management</strong></p>
</li>
<li><p>The user will be presented with the following dashboard, where users, groups, policies, etc. can be managed.</p>
</li>
</ul>
<p><img src="https://lh4.googleusercontent.com/ljDoQkg8rjJct5nf31mnBAM7JIYL8TBzokDR7_dmX8f2lnqe1NzS03Obbo8zONyX0Z1f5fftDs3fFkvgRSoASrttVL7vkeKgNnEfwSyXngzToWl5q7woJ3iQwfSRZ7OLw4PVYj5MkubE__dP9DPhTaYom-6G337l46lieJVZB0e47VNv-I57pT_96CggrQ" alt /></p>
<h2 id="heading-how-to-create-users-in-iam">How to create Users in IAM</h2>
<p>Users are known as entities; the purpose of creating users is to give the team or the environment access to AWS services. We can manage these users with the help of IAM.</p>
<ul>
<li><p>Click <strong>Users</strong> under <strong>Access management</strong></p>
</li>
<li><p>Click on <strong>Add Users</strong></p>
</li>
</ul>
<p><img src="https://lh5.googleusercontent.com/YjCtyquoIEf2JZDorVDYSXo4Bko6FOjV77D3nnmDk2oVNLhOJm_MNjjMZ5thxDeyg5j6Kf1d7rWZ0nRJUYzVjuHGMsOj-4SixUC_ZikqT_-crYcOXuBOMOB1p7stBF8bbOL0XoqM_ImXGezigmaFiyROJMmintG1zQPSn2rGYoLzY4QZ4VlyooZWscI66w" alt /></p>
<ul>
<li>Add a <strong>user name</strong> under <strong>Set user details</strong>.</li>
</ul>
<p><strong><em>(U1 is for demonstration purposes)</em></strong></p>
<ul>
<li><p>There are two ways to provide access to U1:</p>
</li>
<li><p>Access Key - Programmatic Access (<strong><em>the user connects using the CLI, SDKs, and other development tools</em></strong>)</p>
</li>
<li><p>Password - AWS Management Console Access (<strong><em>the user logs in using the AWS web console</em></strong>)</p>
</li>
<li><p>Let's use <strong>Password - AWS Management console access.</strong></p>
</li>
</ul>
<p><img src="https://lh6.googleusercontent.com/fYGqkBaiW0l1a7BLJDDKCbYCYudurV6T_O3juQc6aGGOOhRyu8_Zmgn0njmxPt-LPV_8iQaOPGFLhNZFukwvjaKgsjdI6qZIBfH28Y9SzzHWQw4X1B2WV2yQupFTRHiqlFQDQm2QZoVP6IcN5BnCrAyIVPUWmGj8edNrZhoR_e92rr4sVckZdEaaDIby-A" alt /></p>
<ul>
<li><p>We can add the user to a group here, or create a group for the user if one doesn't already exist.</p>
</li>
<li><p>We can copy permissions from an existing user.</p>
</li>
<li><p>We can also attach existing policies here.</p>
</li>
<li><p>We will just create the user at this time.</p>
</li>
<li><p>Click on <strong>Next: Tags Button</strong></p>
</li>
</ul>
<p><img src="https://lh4.googleusercontent.com/YrIeOfBEtIEl9tDjUhls2hkffzISCNR1RDrHLtG5C5_U3xFw1XbCJpJAKSbvCldabYnMk-xkvL_CnkbakLzd5gxXhbDRbFPorISOI-ZKuCFlgk0-3CCpsqZBtZFM0pzZOfRoEpp4u4jmVMaqUHwNEfhx7lwZxsuw5HaUI2yyAe4p17WyzQvpRXdv6cUphw" alt /></p>
<ul>
<li><p><strong>Add tags</strong> is optional; tags are used to organize, track, and control access for the user. A tag could be the user's email, description, or job title.</p>
</li>
<li><p>Click on the <strong>Next: Review</strong> button, to continue</p>
</li>
</ul>
<p><img src="https://lh5.googleusercontent.com/54RCE0-2GRu7I25i9IqPCNa43EtvWcgeudhIB4mhTwQUHmGNEaTJXOW2juGKByXefk6LucX7gkqPf4dkXhrEyZFY9W3K14K6ClJ9GcNSBEZpnLcd1thTA_GVVuYWIEvbkglhcGWg0qqF2ezpfrKp-USp1lWZFvrKPLq8L_64jto5Px8_BKAXLaIrkYZAoQ" alt /></p>
<ul>
<li>Review all the entered information, then click Create User.</li>
</ul>
<p><img src="https://lh3.googleusercontent.com/QUVNSYyRWo3jX4wCjWDTCopll8m5K3IVMGZWIOm8LvnIaHkEHEMUucm2rSHMZSviMJdQr_ecdCgOFOyTMT_xO5frZXObLqqqaSKhfUFvqc-Y--ptspZt3wi0U6-kHaQzud40tIL6I3_bi9QdsDPeq7sZITQ9vQa0D5OACdwmZXYRoZ48JvxKt0JPZLD_qg" alt /></p>
<ul>
<li>The user account will be created. You can either download the CSV file and share it with the user, or email the CSV file to the user.</li>
</ul>
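<p>The console steps above can also be sketched with the AWS CLI. This is an illustrative sketch, not part of the original walkthrough: it assumes the CLI is installed and configured with credentials allowed to manage IAM, and reuses the demonstration user name U1 (the password here is a placeholder).</p>

```shell
# Create the IAM user (sketch; requires IAM permissions on the account)
aws iam create-user --user-name U1

# Give U1 console access with a password that must be reset at first sign-in
aws iam create-login-profile --user-name U1 \
    --password 'TemporaryP@ssw0rd' --password-reset-required
```

For programmatic access instead of console access, <code>aws iam create-access-key --user-name U1</code> returns an access key ID and secret for the CLI and SDKs.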
<h1 id="heading-creating-groups-in-iam">Creating Groups in IAM</h1>
<p>A group is an identity that collects IAM users in a single environment. Groups are used to specify permissions for multiple users at once, which makes permissions easier to manage and apply across users.</p>
<ul>
<li><p>To create groups, navigate to the left panel, under Access management.</p>
</li>
<li><p>Click on User groups.</p>
</li>
</ul>
<p><img src="https://lh5.googleusercontent.com/5gXI1lZaTw_PrbtmTr16pYmnSDx_uBvY-8RXwn7Ysc2Olp8QU_ZqPC77LdS8UWghzoXBoLr4vlSSg8xT7J46ihn3tifQFo2vNRiFOG_5yVLjQHEgSRr1BQ9rNwsffuE7AUyhdyAHbJMJnWUw0rdIJ95znB4AuAZBy8FDW6I4tz2juwW9O_GiAiJ3dKmNpw" alt /></p>
<ul>
<li><p>Give the group a specific name (ec2-access)</p>
</li>
<li><p>You can also add users to the group now, but this step can be done later.</p>
</li>
</ul>
<p><img src="https://lh4.googleusercontent.com/2A1XHCBHtVt2d1YGdvxCzV9dk1P40X3l7cc2daqlIMtHOWTAlf_-LdnWoE3eJ5reNvkyQPQ_6p39Nef-H8U3uSz34CVIgGJxvGIwIRdhsPifrCtprWGeEO04aub_LzNg6WgA0XroPTNELrAr1E_kRELBY1NNa64P9HCRAHk_b6Dk8COguZbAZTT8L3nhSQ" alt /></p>
<ul>
<li><p>Under Attach permission policies there are many policies to choose from; here we want a group that grants only read-only permissions.</p>
</li>
<li><p>Select <strong>AmazonGlacierReadOnlyAccess</strong></p>
</li>
<li><p>Click <strong>Create Group</strong></p>
</li>
</ul>
<p><img src="https://lh5.googleusercontent.com/qI3eOnVCJjEqcqctDQhYDuxMcDFs3FzaFT9zxDTF9d8NXjg8d_X_RH-Zet9Sv4BO-rC4fScCk8WltTKwHfabG4ymDYWhwmCZKuIDQVhAcKkw82xl_Yce7Is259IyiUP71rDapAKlqqlZ3h2Ubv8DyuGtp6FSx7yVn81jZ6JXoDX6pyCBBGOylOeFzC8O6g" alt /></p>
<ul>
<li>The group ec2-access is created with the defined permissions.</li>
</ul>
<p><img src="https://lh4.googleusercontent.com/rXVUTZ-HtDdXgdaQ-YSREx97P7YO87RuJ6dpnMaZk4o5ywwkP3Ab-UhI0_FbAITxou1kJdYK6-L39P2-mJGBxJpBqQEKgiYsmnFlYNp3rUjKlQ6TDyOj-pNq4OUUp4tpgiiy6kMVUnnKEWiUpaYS36GyoqZmNbFpKhWFAV8Q7HJwgTNuJxcBbmARpJfpjg" alt /></p>
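<p>As a rough CLI equivalent of the group steps above (an illustrative sketch, assuming the AWS CLI is configured with IAM permissions and the demonstration names from this article):</p>

```shell
# Create the group and attach the same AWS-managed read-only policy
aws iam create-group --group-name ec2-access
aws iam attach-group-policy --group-name ec2-access \
    --policy-arn arn:aws:iam::aws:policy/AmazonGlacierReadOnlyAccess

# Put the demonstration user into the group
aws iam add-user-to-group --group-name ec2-access --user-name U1
```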
]]></content:encoded></item><item><title><![CDATA[Sharing File/Folder with Aws S3]]></title><description><![CDATA[Go to your S3 bucket



Users can create folders in the S3 bucket and upload the files and folder in the S3 bucket



In order to share the document from S3, Click on a specific file to open it.


Navigate to the Objects Action Tab

Click on Share wi...]]></description><link>https://blog.automation-dev.us/sharing-filefolder-with-aws-s3</link><guid isPermaLink="true">https://blog.automation-dev.us/sharing-filefolder-with-aws-s3</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS s3]]></category><category><![CDATA[aws s3 versioning]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Mon, 03 Apr 2023 01:03:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1680483751295/a6333181-3163-4ca5-833f-ab10fada20d9.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ul>
<li>Go to your S3 bucket</li>
</ul>
<p><img src="https://lh5.googleusercontent.com/dzmP052CV4HqI0zxNq2tBwMWjJPGK_3caYcal7WqdIEbqMLdUXzBOaiNQBN5FFCaavO_CyM3gdtq0FjMEcmWoQL-iT6gq7cAoO6rM1fqUMWu0OoTEIi81a6eKF2tpIjk8a_eo3aw3eT1HzoYT7zL29SppFrjeOdCEBDkEj0kLzYj1nA6zw1hGg4HYOTh" alt /></p>
<ul>
<li>Users can create folders in the S3 bucket and upload files and folders to it</li>
</ul>
<p><img src="https://lh3.googleusercontent.com/KqGKybHeZqfjCZU86BFIz5D0196g9QFHR-jl-p1Smd-KZRZIWpCGodJPNESZJeDmD3kdkMrvR4KhiAM9wVRsjLB3KFPQcBcsKIdbjW8xbBdls8_P40pO_2T3DwqihRyCzRWPPwks2zQ8yMEAgFBU1ib8mW-anAOWVRfKuxSSYaVzwM8xaf3WXqzYFARI" alt /></p>
<ul>
<li>In order to share a document from S3, click on the specific file to open it.</li>
</ul>
<ul>
<li><p>Navigate to the Object actions tab</p>
</li>
<li><p>Click on Share with a pre-signed URL</p>
</li>
</ul>
<p><img src="https://lh6.googleusercontent.com/yfnhVKFliyr-cEkQ318fp0RBr2W09dXOKuYcutR61N1OtrG4lynv9N25yTZ_5mTLKsXAgeU9fdLC3TM5TI5ejNkHXNObN4W1TdheyKuOdV-IzhOm1X3jvOFgoFEHuY0CfkeP2XWlqDTjWjmNdiAF6vpE29NRjFcApmb62b9otgTBuVzgJJhNgIycd4ii" alt /></p>
<ul>
<li>You can define how long the link stays valid, in minutes or hours.</li>
</ul>
<p><img src="https://lh5.googleusercontent.com/gjhPpC1VGA-qf_6KXUZ9xQBBFrDLqIzKbZm-j9jkS_aTjJiv66AKbGiFkQw_OuapSrRjH6FlQeZoTGSSUuosoGrnuYXnw34hxtlsQrZDYCTGETpyarjSTB8ZL-Uxdhfi-JHOsrLJBxTovbs23oPqVajRM_6eMH3Efx-Ko5JwgDspZWLwf9vwkimDSNjL" alt /></p>
<ul>
<li><p>After creating a pre-signed URL</p>
</li>
<li><p>Share the copied URL with the intended party, and remind them that the file/folder is only available for a limited time.</p>
</li>
</ul>
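<p>The same pre-signed URL can also be produced from the command line. A minimal sketch, assuming the AWS CLI is configured; the bucket and key names are placeholders:</p>

```shell
# Generate a pre-signed GET URL valid for one hour (3600 seconds)
aws s3 presign s3://my-example-bucket/path/to/file.txt --expires-in 3600
```

The command prints a long URL; anyone holding it can download the object until it expires.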
]]></content:encoded></item><item><title><![CDATA[AWS S3 Bucket]]></title><description><![CDATA[Aws S3 service is known as a simple storage service. S3 is also considered a bucket where important files and folders are stored. The files and folders we store in S3 are known as objects. The S3 default region will be Global, we can change it manual...]]></description><link>https://blog.automation-dev.us/aws-s3-bucket</link><guid isPermaLink="true">https://blog.automation-dev.us/aws-s3-bucket</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS s3]]></category><category><![CDATA[S3-bucket]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Mon, 03 Apr 2023 00:59:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1680483546368/6f57e0f6-3a14-494c-9fea-ece8500cd5ce.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AWS S3 stands for Simple Storage Service. In S3, files and folders are stored in containers called buckets, and the items stored are known as objects. The S3 console shows the service scope as Global, but each bucket is created in a particular region that we choose.</p>
<p>In the AWS console search for S3.</p>
<ul>
<li>Click Create Bucket</li>
</ul>
<p><img src="https://lh6.googleusercontent.com/3WOFfXY8C8DPaViHEiUT3xAEg4Q3lPQpshaaait6na8PUQY93mTxSmpOtXLcJWCzOZJCoYQYahFaKh_OyzDBIOzYZtNYZAcn_e1c0JNmjO1yiGhQ4W76Ivo1nX4K0pZIQrmRco1NpqinXu0mpLrTo5fStUc-tGmEZv9Q8QtHSOuHDkL9LZHP_qYnPoKQng" alt /></p>
<ul>
<li>Under General Configuration give your bucket a unique name.</li>
</ul>
<p><img src="https://lh3.googleusercontent.com/Cq-FXbtAUP5nExTqkwaVDffrnyS5Ihz6VQ6jAFSk_KZrRsKh42GnFyLxuTWpTGZSpFaJuHLTPVsAV4XYlOHlLWkqcfsiDLPW35emWZeAPaWUizCX0wNdtJYVuPozKXeOVNF4Q2WrD-W4PHrRm1bJhxufH6QYtpUkxP3mzB9YUFHJH5T-em4iZ49vzWxCow" alt /></p>
<ul>
<li>Leave the Object Ownership as the default</li>
</ul>
<p><img src="https://lh4.googleusercontent.com/H7BnMJDPU5mDMrsIP3dbAGaTtBer8VFh0pFoMUDuF01anMzB6vFHE9Tll0xQoWnxnxio3P9OdruMJJlB5VotDSs6Jv4GkhJ4H8byXfAdvA8DVfgj0UtjwzHlmYNBZtr1R8UgoVZh9qUECT77yX58B8m7maHqKIgar1o3NT3ZBVMUi24MbeJXQLurv3eRNg" alt /></p>
<ul>
<li>Under <strong>Block Public Access settings for this bucket,</strong> leave it as the default </li>
</ul>
<p><img src="https://lh5.googleusercontent.com/9cTPjYesbkrHZm4TojxcbWpLlyz-i63uEGLbwsxox8a8YHVUpUyL1Tp7jo-hH-RYKr9E5ScG_am1kgxLpdCVRSnwbRBdHtnLcsEt3gXIsxfaw9qNiXi2nKgUduMOfwn2C9RaWptr7XgUy3dmA7Y9CK_vcPy5UeZx-vB3zK4AjVo6YRrDGwBavtF7r15KoQ" alt /></p>
<ul>
<li>Leave <strong>Bucket Versioning</strong> as default too.</li>
</ul>
<p><img src="https://lh6.googleusercontent.com/_c8uHUKEZwicXodm4ZIjlYR7YfQSAz6wUC2KD1BkaTUI0jKVqFB7YbgH1z63GpQycHhuYnuSoHonAB6AxQR9CqJj1FAM5Z4rPSREP0XV_5quLXNqtgJ0OUbNKUkdu0XT-3mwL7pgwtQwO2_YBqMSSx5d4p4_Zk6fbc0W1W90pWG-wz7yJEK5mj1IYugM-Q" alt /></p>
<ul>
<li>Skip the <strong>Tags</strong> and leave the <strong>Default encryption</strong> as default too.</li>
</ul>
<p><img src="https://lh5.googleusercontent.com/y9kObV9goXtV7gSubjFleDyE5QNOBC4NxvT3GUpIUeklyzXVll1dp8OhWJ7upZkQOfVizSpgSLFnN89GGQBs_1RIJhjvwKytt8-Dmxuey_LrPJe68mW_OIxFCyqIuZfw272f41X7yN5ZSRwRZT2KGHHQVZrKWSYmw-ezaq7b5jrMu8KOBHa2iRN67--M6Q" alt /></p>
<ul>
<li>No changes are needed under Advanced settings; click on Create bucket</li>
</ul>
<p><img src="https://lh6.googleusercontent.com/SFCmTcf9Rr4Imarhz0A_y040SfbOCR-d02WYS06oKS-E_5lTrFmsjMLE1Wopq1OHcK47rx507wz_TVOspvwzHZPRj-T7LuVY-xakieHcTuoBCA1KFmqcYy3qNogCgqc9nud6t2ArHVpKFOe3TcFQR1g9-YKNUBG1JMbyRsib5wlXR0t8p2VOXFYaITZOXA" alt /></p>
<ul>
<li>You should have your first AWS S3 bucket.</li>
</ul>
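<p>The same bucket can be created from the command line. An illustrative sketch, assuming the AWS CLI is configured; the bucket name and region are placeholders (bucket names must be globally unique):</p>

```shell
# Create a bucket with a globally unique name in the chosen region
aws s3 mb s3://my-unique-bucket-name-12345 --region us-east-1

# List buckets to verify it exists
aws s3 ls
```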
]]></content:encoded></item><item><title><![CDATA[SSH Key Generating for GitHub Process]]></title><description><![CDATA[What is SSH
SSH is also known as Secure Shell Protocol, which allows sys admins or users a secure way to access their systems over an unsecured network. You need a public key and a private key to connect to secure systems.
The public is a key...]]></description><link>https://blog.automation-dev.us/ssh-key-generating-for-git-hub-process</link><guid isPermaLink="true">https://blog.automation-dev.us/ssh-key-generating-for-git-hub-process</guid><category><![CDATA[GitHub]]></category><category><![CDATA[RSA]]></category><category><![CDATA[GitHub-Token]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Mon, 03 Apr 2023 00:54:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Xr1Lwph6eGI/upload/61c4033b5912332545136f71e1928e07.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What is SSH</p>
<p>SSH, also known as Secure Shell Protocol, gives sysadmins and users a secure way to access their systems over an unsecured network. You need a public key and a private key to connect to secure systems.</p>
<p>The public key is the shareable half of the pair: it is placed on the remote system (for example, in GitHub's SSH key settings), which uses it to verify connections made with the matching private key.</p>
<p>The private key is the secret half of the pair; users should keep it saved somewhere safe on their own computers.</p>
<p>The process demonstrated here is done on Linux; if you follow along, it should work without trouble.</p>
<ul>
<li><p>Navigate to the SSH directory: <code>cd ~/.ssh</code></p>
</li>
<li><p>Generate the key pair: <code>ssh-keygen -o -t rsa -C "your GitHub email address"</code></p>
</li>
<li><p>If it asks you for a location, just hit Enter or Return, because by default SSH will look for the key here: <code>/home/devbox/.ssh</code></p>
</li>
<li><p>View the public key: <code>cat id_rsa.pub</code></p>
</li>
<li><p>Copy it and paste it into GitHub under Settings &gt; SSH and GPG keys.</p>
</li>
</ul>
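<p>The steps above can be run end to end as a small script. A minimal sketch: the email address is a placeholder, and the key is written to a temporary directory so the example can run anywhere; in practice you would accept the default <code>~/.ssh</code> location as described above.</p>

```shell
# Generate an RSA key pair non-interactively (-N "" sets an empty passphrase)
keydir="$(mktemp -d)"
ssh-keygen -o -t rsa -b 4096 -C "you@example.com" -f "$keydir/id_rsa" -N ""

# The .pub file is the public key you paste into GitHub's SSH key settings
cat "$keydir/id_rsa.pub"
```

The private key (<code>id_rsa</code>) stays on your machine; only the <code>.pub</code> file is ever shared.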
]]></content:encoded></item><item><title><![CDATA[Basic Git & GitHub for DevOps Engineers]]></title><description><![CDATA[What is Git?
Git is a popular and widely used version control system that allows developers to keep track of changes made to their code over time, collaborate with others, and work on multiple versions of their code.
What is a GitHub?
GitHub is a web...]]></description><link>https://blog.automation-dev.us/basic-git-github-for-devops-engineers</link><guid isPermaLink="true">https://blog.automation-dev.us/basic-git-github-for-devops-engineers</guid><category><![CDATA[#90daysofdevops]]></category><category><![CDATA[#90daysofdevops chanllenge]]></category><category><![CDATA[#90daysofdevopschallenge]]></category><category><![CDATA[Devops]]></category><category><![CDATA[GitHub]]></category><dc:creator><![CDATA[Asad Rafique]]></dc:creator><pubDate>Sun, 26 Mar 2023 18:19:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/KPAQpJYzH0Y/upload/6040eafd8a2403266bdb1680873d77f5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-git">What is Git?</h2>
<p><code>Git</code> is a popular and widely used version control system that allows developers to keep track of changes made to their code over time, collaborate with others, and work on multiple versions of their code.</p>
<h2 id="heading-what-is-a-github">What is a GitHub?</h2>
<p><code>GitHub</code> is a web-based platform that provides hosting for Git repositories and offers additional features such as issue tracking, pull requests, and code reviews. It allows developers to share and collaborate on their code with others.</p>
<h2 id="heading-what-is-version-control">What is Version control?</h2>
<p><code>Version control</code> is a system that allows developers to keep track of changes made to their code over time, and enables them to revert to previous versions or collaborate with others on the same codebase.</p>
<h3 id="heading-types-of-version-controls">Types of Version Controls</h3>
<p>There are two main types of version control systems: <code>centralized</code> and <code>distributed</code>.</p>
<ul>
<li><code>Centralized version control</code> systems store code in a central server, while distributed version control systems create multiple copies of the repository on different computers.</li>
</ul>
<ul>
<li><code>Distributed version control</code> offers several advantages over centralized version control, such as the ability to work offline and make commits locally, easier collaboration and branching, and a faster workflow.</li>
</ul>
<h3 id="heading-everything-combined">Everything Combined</h3>
<p>Git is a version control system that allows developers to keep track of changes made to their code, GitHub is a platform that provides hosting for Git repositories and additional features, version control is a system that allows developers to manage and track changes made to their code over time, there are two main types of version control systems (centralized and distributed), and distributed version control offers several advantages over centralized version control.</p>
<h3 id="heading-10-basic-git-commands">10 Basic git Commands</h3>
<ol>
<li><p><code>git init</code>: Initializes a new Git repository in your current working directory.</p>
</li>
<li><p><code>git add</code>: Adds changes made to your code to the staging area, which prepares them to be committed.</p>
</li>
<li><p><code>git commit</code>: Commits changes to the repository and creates a new snapshot of the code.</p>
</li>
<li><p><code>git status</code>: Shows the current status of the repository, including which files have been modified or added, and which files are ready to be committed.</p>
</li>
<li><p><code>git log</code>: Shows a history of all the commits made to the repository, along with their messages and other details.</p>
</li>
<li><p><code>git branch</code>: Lists all the branches in the repository, and shows which branch you are currently on.</p>
</li>
<li><p><code>git checkout</code>: Switches to a different branch or a specific commit.</p>
</li>
<li><p><code>git pull</code>: Updates the local repository with changes made in the remote repository.</p>
</li>
<li><p><code>git push</code>: Sends changes made locally to the remote repository.</p>
</li>
<li><p><code>git clone</code>: Creates a copy of a remote repository on your local machine.</p>
</li>
</ol>
<p>Git has many more powerful features and commands that can help you manage your code and collaborate with others.</p>
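<p>Several of these commands can be tried together in a throwaway local repository. A minimal sketch; the directory, name, and email are placeholders, and no remote is needed:</p>

```shell
# Initialize a repository in a temporary directory
repo="$(mktemp -d)"
cd "$repo"
git init -q

# An identity is required before committing (placeholder values)
git config user.name "demo"
git config user.email "demo@example.com"

# Stage and commit a file, then inspect the history
echo "hello" > README.md
git add README.md
git commit -q -m "add README"
git log --oneline
```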
<h3 id="heading-task-1-new-repository-in-github-and-cloning-it-locally">Task 1: New Repository in Github and Cloning it locally.</h3>
<p>A GitHub account already exists. I created the repository Day-8-Task on GitHub. I will clone the repo, make changes to the README.md file, and commit the changes to GitHub.</p>
<ul>
<li>To clone the repository <code>git clone</code> followed by the repository link</li>
</ul>
<p><code>git clone</code> <a target="_blank" href="https://github.com/Arafique458/Day-8-Task.git"><code>https://github.com/Arafique458/Day-8-Task.git</code></a></p>
<ul>
<li>To configure the credentials, I will be using a personal access token to connect to the repository.</li>
</ul>
<p><code>git config --global user.name "arafique458"</code></p>
<p><code>git config --global user.email "arafique458@gmail.com"</code></p>
<ul>
<li>To access the cloned directory</li>
</ul>
<p><code>cd Day-8-Task</code></p>
<ul>
<li>To make changes to the README.md file</li>
</ul>
<p><code>vim README.md</code></p>
<ul>
<li>After making changes check the status of the file modified.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679853787414/886d65a0-0f0b-449b-8935-0292ff11e7d4.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>To stage the changes: <code>git add .</code> (this stages all the files in the directory; alternatively, we can specify the files we want to stage)</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679853927671/315283e2-2d2a-4811-aad6-b53c5b81b232.png" alt class="image--center mx-auto" /></p>
<ul>
<li>To commit the change made to the README.md file: <code>git commit -m "describe what changed"</code></li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679854077422/ce0a6f8d-7432-423b-bf32-1724a0b6c0e3.png" alt class="image--center mx-auto" /></p>
<ul>
<li>To push the changes to the main branch of the GitHub repository: <code>git push origin main</code> (I was prompted to enter credentials; I used my GitHub email address and the access token)</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679854353568/d021d09e-7e04-4db3-9c7d-4e0138166d41.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-task-2-file-in-the-repository-changed-and-commit-was-made-in-the-repository-using-git">Task 2: File in the repository was changed and a commit was made using Git</h3>
<p><strong>GitHub repository before making a commit</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679854442102/41b24a52-99eb-4ef0-abe4-670379316d92.png" alt class="image--center mx-auto" /></p>
<p><strong>GitHub repository after making a commit</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679854505740/29ba22f8-f7cb-47fc-bae5-e1116a008a91.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-task-3-pushed-changed-commit-screenshot-of-github-repository">Task 3: Pushed Changed / Commit Screenshot of GitHub repository</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679854695714/0a9c569d-f0a1-4245-94e5-a7ee2723cd00.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item></channel></rss>