Encrypt a file using openssl

Storing sensitive information (such as passwords, configuration values, etc.) in plain text is bad practice. Fortunately, there are many tools available to remedy this, such as openssl and gpg.

openssl is usually used to generate SSL certificates, but you can also use it to encrypt files. Below is a simple example of encrypting and decrypting a file.

  1. Create a simple file.

    $ echo "This is sensitive info! You should store it safely." > info.config
  2. Encrypt the file using the aes-256-cbc algorithm.

    $ openssl enc -aes-256-cbc -in info.config -out info.config.data
    enter aes-256-cbc encryption password: {password}
    Verifying - enter aes-256-cbc encryption password: {password}
    
  3. Decrypt the file:
    $ openssl enc -aes-256-cbc -d -in info.config.data > info.config.out
    enter aes-256-cbc decryption password: {password}
    
  4. Compare the MD5 checksums to make sure the decrypted file matches the original.
    $ md5sum info.config
    c640d4578cd8445727c790f419d01b1c  info.config
    $ md5sum info.config.out
    c640d4578cd8445727c790f419d01b1c  info.config.out
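
The example above uses OpenSSL's default password-based key derivation. If your OpenSSL build is version 1.1.1 or newer, you can strengthen it by adding the -salt and -pbkdf2 flags (use the same flags when decrypting):

$ openssl enc -aes-256-cbc -salt -pbkdf2 -in info.config -out info.config.data
$ openssl enc -aes-256-cbc -d -salt -pbkdf2 -in info.config.data > info.config.out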
    

Java vs. Python

There is a great article by AppDynamics (which I recommend reading) comparing Java and Python. Below is a short summary of it:

Python:

  1. Python is the older language – released in 1991 by Guido van Rossum.
  2. It is designed for readability and simplicity.
  3. Reference implementation is in C – CPython.
  4. The Global Interpreter Lock (GIL) in CPython prevents more than one thread from executing Python bytecode at a time, which limits multi-threaded scaling for CPU-bound work.

Java:

  1. Java was released in 1995 by James Gosling and others at Sun Microsystems.
  2. Designed to be portable and efficient.
  3. Compiled language.
  4. Runs on Java Virtual Machine.
  5. Much of Java's efficiency comes from optimizations in the virtual machine: the JVM can translate bytecode to native machine code as the program executes – Just-In-Time (JIT) compilation.

Setting up cross account access on AWS

Production and development services are typically separated into separate AWS accounts. You might have a scenario where you need to access one or more AWS services (e.g. S3 or DynamoDB) in production from development. In order to do so, you need to set up cross-account access via IAM roles.

Below are the steps to create the IAM roles in development and production that grant access to S3 and DynamoDB in production.

In your development AWS account:

  1. Create an IAM role – dev-role.
  2. Configure the trust relationship to allow EC2 instances to assume this role:
    {
      "Version": "2012-10-17",
      "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Service": "ec2.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
      }
      ]
    }
    
  3. Create an inline policy for the role:
    {
      "Version": "2012-10-17",
      "Statement": [
      {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::{prod-account-id}:role/prod-assume-role"
      },
      {
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "arn:aws:iam::{prod-account-id}:role/prod-assume-role"
      },
      {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::prod-bucket"
      },
      {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::prod-bucket/*"
      },
      {
        "Effect": "Allow",
        "Action": "dynamodb:*",
        "Resource": "arn:aws:dynamodb:eu-west-1:{prod-account-id}:table/prod-table"
      }
      ]
    }
    

    You might not have created the IAM role arn:aws:iam::{prod-account-id}:role/prod-assume-role yet, but you can reference it in the policy as a placeholder (until you create it below).

  4. Once it is created, take note of the IAM role ARN (e.g. arn:aws:iam::{dev-account-id}:role/dev-role).

In your production AWS account:

  1. Create an IAM role – prod-assume-role.
  2. Configure the trust relationship to include the IAM role you created above:
    {
      "Version": "2012-10-17",
      "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::{dev-account-id}:root"
        },
        "Action": "sts:AssumeRole",
        "Condition": {}
      },
      {
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::{dev-account-id}:role/dev-role"
        },
        "Action": "sts:AssumeRole",
        "Condition": {}
      }
      ]
    }
    
  3. Create an inline policy for the role:
    {
      "Version": "2012-10-17",
      "Statement": [
      {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::prod-bucket"
      },
      {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::prod-bucket/*"
      },
      {
        "Effect": "Allow",
        "Action": "dynamodb:*",
        "Resource": "arn:aws:dynamodb:eu-west-1:{prod-account-id}:table/prod-table"
      }
      ]
    }
    
  4. Once it is created, take note of the IAM role ARN (e.g. arn:aws:iam::{prod-account-id}:role/prod-assume-role), since it will be used below to verify the cross-account access.
  5. Update the prod-bucket S3 bucket policy to allow access for arn:aws:iam::{dev-account-id}:role/dev-role:
    {
      "Version": "2012-10-17",
      "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
        "AWS": [
          "arn:aws:iam::{prod-account-id}:root",
          "arn:aws:iam::{dev-account-id}:role/dev-role"
        ]
        },
        "Action": "s3:*",
        "Resource": [
          "arn:aws:s3:::prod-bucket",
          "arn:aws:s3:::prod-bucket/*"
        ]
      }
      ]
    }
    

Assuming you have an EC2 instance with the arn:aws:iam::{dev-account-id}:role/dev-role role attached, let's verify that the cross-account access works. From within the EC2 instance, do the following:

$ pip install awscli --upgrade --user && export PATH=~/.local/bin:$PATH
$ aws sts assume-role --role-arn arn:aws:iam::{prod-account-id}:role/prod-assume-role --role-session-name cross-account
$ export AWS_DEFAULT_REGION=eu-west-1
$ export AWS_ACCESS_KEY_ID={AccessKeyId}
$ export AWS_SECRET_ACCESS_KEY={SecretAccessKey}
$ export AWS_SESSION_TOKEN={SessionToken}
$ aws sts get-caller-identity
$ aws dynamodb describe-table --table-name prod-table --region eu-west-1
$ aws s3 ls s3://prod-bucket

Replace {AccessKeyId}, {SecretAccessKey}, and {SessionToken} with the correct
values from the output of aws sts assume-role.
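
If jq happens to be installed on the instance, a convenient variation is to parse the assume-role output and export the temporary credentials in one go (a sketch, using the same role and session name as above):

$ CREDS=$(aws sts assume-role --role-arn arn:aws:iam::{prod-account-id}:role/prod-assume-role --role-session-name cross-account --query Credentials --output json)
$ export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
$ export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
$ export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)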

Upload a server certificate to AWS

It is good practice to protect your website with an SSL certificate, which secures the communication between your website and its users.

Typically, you would install the SSL certificate on an EC2 instance and terminate SSL there using nginx, but the same can be done on an AWS Elastic Load Balancer. This offloads the decryption work you would otherwise place on the EC2 instance. To accomplish this, you need to:

  1. Request a certificate – using DigiCert or another certificate authority.
  2. Upload the certificate to your AWS account using the upload-server-certificate command, then take note of the ARN of the uploaded certificate (it is returned in the command output).
    $ aws iam upload-server-certificate --server-certificate-name my-site-cert --certificate-body file:///ssl-cert/my.site.com-cert --certificate-chain file:///ssl-cert/IntermediateCA.crt --private-key file:///ssl-cert/my.site.com-priv-key.pem
    
  3. Create a new load balancer which includes an HTTPS listener, and supply the certificate ARN from the previous step.
  4. Configure a health check and associate EC2 instances with the load balancer (a rough CLI sketch of steps 3 and 4 follows below).
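
For reference, below is a rough sketch of steps 3 and 4 using the classic ELB CLI. The load balancer name, availability zone, health check target, and instance ID are placeholders; the certificate ARN comes from the output of upload-server-certificate above:

$ aws elb create-load-balancer --load-balancer-name my-site-lb --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::{account-id}:server-certificate/my-site-cert" --availability-zones eu-west-1a
$ aws elb configure-health-check --load-balancer-name my-site-lb --health-check Target=HTTP:80/index.html,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
$ aws elb register-instances-with-load-balancer --load-balancer-name my-site-lb --instances i-0123456789abcdef0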

Docker design patterns

Below are a few design patterns for Docker. It is an abbreviated version of the docker-container-anti-patterns article. A short example illustrating a few of these points follows the list.

  1. Containers are ephemeral. Data or logs should be stored in volumes.
  2. Create services to tie containers together, such as the frontend (nginx)
    and backend (database). This provides basic load balancing. You can update
    your services and containers independently of each other.
  3. A Dockerfile uses CMD or ENTRYPOINT to perform some configuration and then
    start the container. Do not start multiple processes in that script. It makes
    updating your container much harder.
  4. “docker exec” runs a new command in a running container. It is useful
    for attaching a shell (docker exec -it {id} bash).
  5. Your image should be lean. Create a directory and include a Dockerfile and
    anything relevant there. Use .dockerignore to exclude any logs, source code, etc.
    from the build context before creating the images.
  6. Do not store security credentials in a Dockerfile. They are in clear text and checked
    into a repository, making them vulnerable.
  7. Use tags when running a container. “latest” may not actually be the latest and
    could instead be an older version.
  8. Do not run your containers as root. A compromised container can damage your
    underlying host.
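
As a quick illustration of points 1, 7, and 8, the commands below mount a named volume for data, pin a specific image tag, and run the container as a non-root user (the image name, tag, and UID are hypothetical):

$ docker volume create app-data
$ docker run -d --name myapp --user 1000:1000 -v app-data:/var/lib/myapp myimage:1.4.2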

Kubernetes basics

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure.

The Kubernetes master runs kube-apiserver, kube-controller-manager, and kube-scheduler.

Each individual non-master node runs kubelet and kube-proxy.

Basic Kubernetes objects include:

  • Pod: Group of one or more containers.
  • Service: Defines a logical set of Pods.
  • Volume: Just a directory, which is accessible to the containers in a pod.
  • Namespace: Virtual clusters backed by the same physical cluster.

Kubernetes contains a number of higher-level abstractions called Controllers.

  • ReplicaSet: Ensures that a specified number of pod “replicas” are running at any given time.
  • Deployment: Provides a way to update Pods and ReplicaSets.
  • StatefulSet: Provides a unique identity to its Pods.
  • DaemonSet: Ensures that all (or some) nodes run a copy of a pod.
  • Job: Creates one or more pods and ensures that a specified number of them run to completion.

Please refer to https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/ for more info.
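
As a minimal hands-on sketch (assuming kubectl is installed and configured against a cluster; the deployment name and image tag are just examples), the commands below create a Deployment, scale its ReplicaSet, and expose it as a Service:

$ kubectl create deployment hello --image=nginx:1.15
$ kubectl scale deployment hello --replicas=3
$ kubectl expose deployment hello --port=80 --type=ClusterIP
$ kubectl get pods,services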

Finding SSL protocols and ciphers using nmap

nmap is a network tool and security/port scanner.  You can use it to scan an endpoint and list all the SSL protocols and ciphers that are supported by that endpoint. Below is an example of scanning port 443 on Google. nmap is very helpful when it comes to debugging SSL exceptions, such as Caused by: javax.net.ssl.SSLException: Received fatal alert: handshake_failure.

$ nmap --script ssl-enum-ciphers -p 443 www.google.com

Starting Nmap 7.01 ( https://nmap.org ) at 2017-12-07 16:43 EST
Nmap scan report for www.google.com (172.217.7.228)
Host is up (0.030s latency).
Other addresses for www.google.com (not scanned): 2607:f8b0:4004:802::2004
rDNS record for 172.217.7.228: iad23s58-in-f4.1e100.net
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers: 
|   TLSv1.0: 
|     ciphers: 
|       TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
|     compressors: 
|       NULL
|     cipher preference: server
|   TLSv1.1: 
|     ciphers: 
|       TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
|     compressors: 
|       NULL
|     cipher preference: server
|   TLSv1.2: 
|     ciphers: 
|       TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
|     compressors: 
|       NULL
|     cipher preference: server
|_  least strength: C

Nmap done: 1 IP address (1 host up) scanned in 3.43 seconds
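
Once nmap shows which protocols and ciphers an endpoint supports, openssl s_client is handy for confirming whether a handshake succeeds with a specific protocol (the host and protocol flag below are just examples):

$ openssl s_client -connect www.google.com:443 -tls1_2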

AWS Lambda

AWS Lambda is a simple and inexpensive way to run your code without thinking about the backend. Write your code in the language of your choice (Python, Java, Node.js, and a few others are supported) and upload it and its dependencies to AWS Lambda. Then choose when your code will run: trigger it with an AWS event (e.g. a new object in an S3 bucket) or schedule it to run periodically. The first million requests are free and it is only $0.20 per million requests afterwards.

When your Lambda function is invoked, a container is created to run it. That container may be reused on subsequent Lambda function calls. The container has a user-defined limit on how long it can run (max of 5 minutes) and on its memory footprint (max of 1.5 GB). The limits on AWS Lambda can be found at http://docs.aws.amazon.com/lambda/latest/dg/limits.html.

There are a variety of uses for AWS Lambda. It can be used (along with AWS API Gateway) to create a REST API. It can be used to react to published AWS events in S3, Kinesis, DynamoDB, and SNS. It can also be used to schedule events. You can find out the various use cases at http://docs.aws.amazon.com/lambda/latest/dg/use-cases.html.

Below are the steps to prepare your Python code for AWS Lambda; a sample upload command follows the list.

  1. Install the dependencies within the source code directory.
    $ pip install -r requirements.txt -t .
  2. Zip up the contents of the source code directory.
    $ zip -r ../lambda_bundle.zip *
  3. Take note of the module where your function resides. You will use it (e.g. your_module.your_function) as the Lambda handler.
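
Once the bundle is ready, it can be uploaded with the AWS CLI. Below is a rough sketch; the function name, IAM role ARN, runtime version, timeout, and memory size are placeholders you would adjust:

$ aws lambda create-function --function-name my-function --runtime python3.6 --role arn:aws:iam::{account-id}:role/my-lambda-role --handler your_module.your_function --zip-file fileb://../lambda_bundle.zip --timeout 60 --memory-size 256
$ aws lambda invoke --function-name my-function --payload '{}' output.json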

 

Deployment strategies

Applications are often built on multiple services (e.g. Redis, MySQL, gunicorn, nginx) and distributed across several machines to maintain high availability and resiliency. When there is a new release, you might have to take some of those services down so they can be updated. This can cause downtime while a service is updated, but there are several ways to minimize or eliminate it. Below are a few popular deployment strategies.

  1. Blue-Green
    • You operate two identical production environments (i.e. Blue and Green).
    • Only one environment is live (e.g. Blue).
    • A new version of the software is deployed to the environment that is not live (e.g. Green).
    • Once the Green environment is tested and ready, you switch the router/load balancer so all requests are forwarded to it instead of the Blue environment (see the sketch after this list).
  2. A-B
    • You operate two or more production environments at the same time.
    • The simplest form of A/B deployment is to divide traffic between two or more environments – for example, 50% of visitors see variation A and 50% see variation B.
  3. Canary
    • You deploy changes to a small subset of servers, test it, and then roll out the changes to the rest of the servers.
    • Canary deployment serves as an early warning. If a canary deployment fails, the rest of the servers are not affected.
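
As a sketch of the Blue-Green switch (assuming the two environments are registered as separate target groups behind an AWS Application Load Balancer; the listener and target group ARNs are placeholders):

$ aws elbv2 describe-target-health --target-group-arn {green-target-group-arn}
$ aws elbv2 modify-listener --listener-arn {listener-arn} --default-actions Type=forward,TargetGroupArn={green-target-group-arn}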

docker-compose

Compose is a tool for defining and running multi-container Docker applications. You can start your entire application stack (defined in the Compose file) with a single command. Begin by:

  1. Creating a Dockerfile for your application.
  2. Defining the dependent services used by your application in a docker-compose.yml. Below is a sample application (i.e. foobar) that depends on Redis and MySQL.

    # Compose is a tool for defining and running multi-container Docker applications
    version: "2"
    services:
      foobar:
        # Use image that is built from Dockerfile in current directory
        build: .
        container_name: foobar
        
        # Forwards exposed port 80 on container to port 80 on host machine
        ports:
          - "80:80"
    
        # Containers for linked service will be reachable at a hostname identical to the alias
        # or the service name if no alias was specified
        links:
          - redis:redis
          - mysql:mysql
    
      mysql:
        image: "mysql:latest"
        container_name: mysql
        environment:
          # Password for root superuser account
          - MYSQL_ROOT_PASSWORD=supersecret
          # Create a new user and to set that user's password
          - MYSQL_USER=foobar
          - MYSQL_PASSWORD=secret
          # Name of a database to be created on image startup
          - MYSQL_DATABASE=foobar
        ports:
          - "3306:3306"
    
      redis:
        # Use public Redis image pulled from Docker Hub
        image: "redis:alpine"
        container_name: redis
        ports:
          - "6379:6379"
    
  3. Bringing up the stack with ‘docker-compose up’.

To install docker-compose:

# curl -L https://github.com/docker/compose/releases/download/1.12.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose

Below is typical docker-compose usage. For the complete CLI, see the docker-compose CLI reference.

Bring up the application stack.

# docker-compose up

Bring down the application stack and remove the images used by its services.

# docker-compose down --rmi all
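
A few other commands that come in handy day to day (using the foobar service from the sample above):

# docker-compose ps
# docker-compose logs -f foobar
# docker-compose build
# docker-compose restart redis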