How Trace3 Accelerated JFrog Artifactory’s AWS Deployment

DEC 09, 2019

by Darren Boyd, Trace3

Trace3 is a consulting organization that helps enterprises move to the public cloud and build out their hybrid enterprise IT environment. JFrog Artifactory is an enterprise-grade universal artifact repository manager that hosts all of an organization’s binaries. JFrog Artifactory can be either a self-managed solution hosted in the user’s on-premises infrastructure, or a SaaS solution hosted and managed by JFrog on Google Cloud Platform, Microsoft Azure, or Amazon Web Services (AWS).

To make it easier for development and DevOps teams to deploy JFrog Artifactory Cloud on AWS, earlier this year the company made a strategic decision to prepare a Quick Start rapid deployment package. As part of that effort, they turned to Trace3, an experienced APN Advanced Consulting Partner whose team holds AWS certifications including Cloud Practitioner, Solutions Architect (Associate and Professional), DevOps Engineer (Professional), and Advanced Networking (Specialty).

This blog post describes how Trace3, in close cooperation with AWS, accelerated the deployment of a scalable, highly available cloud-native JFrog Artifactory architecture on AWS.

About the JFrog AWS Architecture

JFrog came to Trace3 with its own Artifactory architecture, designed for infinite scalability, high availability (HA), seamless integration with the customer’s CI/CD ecosystem, and automated release pipelines. Trace3 advised JFrog on how these architectural design goals could be delivered via a Quick Start on AWS using cloud-native services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Elastic Load Balancing (ELB), and Amazon Relational Database Service (Amazon RDS).

Although JFrog Artifactory can also be deployed on Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), the default compute layer is Amazon EC2. The deployment (see Figure 1) creates two Amazon EC2 Auto Scaling groups: one manages the primary node, and the other scales secondary nodes up or down. High availability is achieved by spanning two Availability Zones. Other key features of the default deployment architecture include:

  • A Virtual Private Cloud (VPC) configured with public and private subnets in compliance with AWS best practices. You can, however, customize the deployment template to point to an existing VPC.
  • Public subnets that ensure secure ingress and egress traffic to and from the internet.
  • Private subnets that contain, in addition to the two Amazon EC2 Auto Scaling groups described above, an Amazon RDS for MySQL instance for persistent storage of system parameters.
  • A Classic Load Balancer that monitors the Auto Scaling groups and runs health checks to validate that the Artifactory service is available, triggering recovery of a new node within 10 minutes if an endpoint returns an error response (a minimal health-check sketch follows Figure 1).
Figure 1: Default reference architecture for Artifactory on AWS
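
To illustrate the recovery behavior described above, here is a minimal boto3 sketch of a Classic Load Balancer health check pointed at Artifactory’s system ping endpoint. The Quick Start’s CloudFormation template configures this for you; the load balancer name, Region, port, path, and timing values below are assumptions for illustration only.

    import boto3

    # Sketch only: the Quick Start configures the health check automatically.
    elb = boto3.client("elb", region_name="us-west-2")  # assumed Region

    elb.configure_health_check(
        LoadBalancerName="artifactory-elb",  # assumed load balancer name
        HealthCheck={
            # Artifactory's readiness endpoint on its default port (assumed values).
            "Target": "HTTP:8081/artifactory/api/system/ping",
            "Interval": 30,            # seconds between checks
            "Timeout": 5,              # seconds to wait for a response
            "UnhealthyThreshold": 3,   # failures before marking the node unhealthy
            "HealthyThreshold": 2,     # successes before marking it healthy again
        },
    )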

About AWS Quick Starts

Quick Starts ease the deployment of third-party workloads on AWS. Each Quick Start consists of:

  • A reference deployment architecture (such as the one shown in Figure 1).
  • One or more AWS CloudFormation templates (JSON or YAML scripts) that automatically deploy and configure the required AWS compute, network, storage and other services.
  • A detailed deployment and customization guide.

A Quick Start uses Infrastructure as Code and other cloud-native methods to reduce hundreds of manual procedures to a few steps. Quick Starts also ensure that the deployed production workloads comply with AWS best practices for security and availability. In addition, the templates are easy to customize in order to smoothly integrate the deployed workloads into existing environments.
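
If you want to see what a given template lets you customize before launching it, you can list its parameters programmatically. The boto3 sketch below is illustrative only; the template URL is a placeholder for the Quick Start template location given in the deployment guide, and GetTemplateSummary reads the template without creating any resources.

    import boto3

    cloudformation = boto3.client("cloudformation", region_name="us-west-2")  # assumed Region

    # Placeholder URL: substitute the template location from the deployment guide.
    template_url = "https://example-bucket.s3.amazonaws.com/artifactory-quickstart.template.yaml"

    # List every parameter the template accepts, with defaults and descriptions.
    summary = cloudformation.get_template_summary(TemplateURL=template_url)

    for param in summary["Parameters"]:
        print(param["ParameterKey"], "default:", param.get("DefaultValue", "<none>"))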

Last but not least, there is no cost for using a Quick Start itself; you pay only for the AWS resources and services used while deploying and then running the workload. AWS recommends enabling the AWS Cost and Usage Report to track ongoing costs associated with the Quick Start. Billing metrics are delivered to an S3 bucket in the user’s account, with estimated charges updated throughout the month based on usage and finalized at the end of the month.
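
If you prefer to script that step, the AWS Cost and Usage Reports API can create the report definition for you. The sketch below is a minimal example under stated assumptions: the report name, S3 bucket, prefix, and Regions are placeholders, and the bucket must already exist with a policy that allows billing delivery.

    import boto3

    # The Cost and Usage Reports API is served from us-east-1 only.
    cur = boto3.client("cur", region_name="us-east-1")

    cur.put_report_definition(
        ReportDefinition={
            "ReportName": "artifactory-quickstart-costs",  # placeholder name
            "TimeUnit": "DAILY",
            "Format": "textORcsv",
            "Compression": "GZIP",
            "AdditionalSchemaElements": ["RESOURCES"],     # include resource IDs
            "S3Bucket": "my-billing-reports-bucket",       # placeholder; must already exist
            "S3Prefix": "artifactory/",
            "S3Region": "us-west-2",                       # assumed bucket Region
            "RefreshClosedReports": True,
            "ReportVersioning": "OVERWRITE_REPORT",
        }
    )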

Getting Started with JFrog Artifactory on AWS

This section shows how easy it is to get started with JFrog Artifactory, following the default deployment guide, which uses Amazon EC2 as the compute layer.

A Few Prerequisites

Here are a few things you need to have in place before starting the deployment:

  • An Enterprise or Enterprise+ license for Artifactory (you can start with a free trial license).
  • An AWS account configured with some basic resources as outlined on p. 8 of the deployment guide, such as a load balancer, several compute instances, an RDS instance, an S3 bucket, and so on.
  • At least one Amazon EC2 key pair in the Region where the Quick Start is to be deployed (see the verification sketch after this list).
  • An AWS Identity and Access Management (IAM) user with the permissions required for the resources and actions the templates deploy. The AWS managed AdministratorAccess policy provides all the necessary permissions.
  • An SSL certificate and certificate key.
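
A couple of these prerequisites are easy to verify before you launch anything. The boto3 sketch below checks that your credentials resolve to a valid identity and that at least one EC2 key pair exists in the target Region; the Region shown is an assumption.

    import boto3

    region = "us-west-2"  # assumed target Region for the Quick Start

    # Confirm the credentials in use resolve to a valid IAM identity.
    identity = boto3.client("sts").get_caller_identity()
    print("Deploying as:", identity["Arn"])

    # Confirm at least one EC2 key pair exists in the target Region.
    key_pairs = boto3.client("ec2", region_name=region).describe_key_pairs()["KeyPairs"]
    if not key_pairs:
        raise SystemExit(f"No EC2 key pairs found in {region}; create one before deploying.")
    print("Available key pairs:", [kp["KeyName"] for kp in key_pairs])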

Five Steps to Onboarding Artifactory on AWS

  1. Sign in to your (properly configured) AWS account.
  2. Add the Artifactory license keys to AWS Secrets Manager in the same region in which the Quick Start is to be deployed.
  3. In an editor of your choice, prepare the SSL certificate and certificate key parameters by replacing newline characters with a | (pipe) character.
  4. In the AWS Management Console, launch the Quick Start by choosing the appropriate AWS CloudFormation template: one deploys the workload into an existing VPC, and the other deploys both a new VPC and the workload. In either case, the deployment takes about 30 minutes to complete. You will be prompted to fill in configuration parameters for security, JFrog Artifactory (licenses, certificates), and the Amazon RDS database (password); if you are deploying to an existing VPC, you will also be prompted for network configuration parameters. (Steps 2–4 are illustrated in the sketch after this list.)
  5. Connect to Artifactory and use its setup wizard for the initial configuration, including selecting the required repositories.
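
For teams that prefer to script steps 2 through 4, the boto3 sketch below shows one way they might look. It is a sketch under stated assumptions, not the definitive procedure: the Region, secret name and JSON layout, certificate file paths, template URL, stack name, and parameter keys are all placeholders; use the names your version of the deployment guide specifies.

    import json
    import boto3

    region = "us-west-2"  # assumed deployment Region

    # Step 2: store the Artifactory license keys in AWS Secrets Manager in the
    # same Region as the deployment. The secret name and key layout below are
    # placeholders; follow the structure required by the deployment guide.
    secretsmanager = boto3.client("secretsmanager", region_name=region)
    secretsmanager.create_secret(
        Name="jfrog-artifactory-licenses",
        SecretString=json.dumps({
            "License1": "<license key 1>",
            "License2": "<license key 2>",
        }),
    )

    # Step 3: flatten the SSL certificate and key so each fits in a single
    # template parameter, replacing newlines with pipe characters.
    def flatten_pem(path):
        with open(path) as pem:
            return pem.read().strip().replace("\n", "|")

    certificate = flatten_pem("server.crt")        # placeholder file paths
    certificate_key = flatten_pem("server.key")

    # Step 4: launch the Quick Start stack. The template URL and parameter keys
    # are illustrative; the console wizard populates these for you.
    cloudformation = boto3.client("cloudformation", region_name=region)
    cloudformation.create_stack(
        StackName="jfrog-artifactory",
        TemplateURL="https://example-bucket.s3.amazonaws.com/artifactory-quickstart.template.yaml",
        Parameters=[
            {"ParameterKey": "KeyPairName", "ParameterValue": "my-key-pair"},
            {"ParameterKey": "Certificate", "ParameterValue": certificate},
            {"ParameterKey": "CertificateKey", "ParameterValue": certificate_key},
            {"ParameterKey": "DatabasePassword", "ParameterValue": "<rds password>"},
        ],
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
    )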

You are now ready to use JFrog Artifactory on AWS, although you will still have to complete the routine administrative tasks of configuring backups, maintenance operations, and authentication.

Updates

In the future, you may need to change certificate or network parameters, or update to a newer JFrog Artifactory version. All of these changes are applied to the CloudFormation stack itself; you then shut down the primary node. This prompts the Auto Scaling group to terminate the current primary node and launch a replacement with the updated parameters or version. You then restart the secondary nodes one by one.
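
If you manage the stack programmatically, a parameter or version change could look like the hedged sketch below: reuse the existing template, override only the parameter you are changing, and keep previous values for the rest. The stack name and parameter keys are placeholders, not the Quick Start's actual parameter names.

    import boto3

    cloudformation = boto3.client("cloudformation", region_name="us-west-2")  # assumed Region

    # Update a single (assumed) parameter while keeping the existing template
    # and all other parameter values.
    cloudformation.update_stack(
        StackName="jfrog-artifactory",  # placeholder stack name
        UsePreviousTemplate=True,
        Parameters=[
            {"ParameterKey": "ArtifactoryVersion", "ParameterValue": "<new version>"},
            {"ParameterKey": "KeyPairName", "UsePreviousValue": True},
            {"ParameterKey": "Certificate", "UsePreviousValue": True},
            {"ParameterKey": "CertificateKey", "UsePreviousValue": True},
            {"ParameterKey": "DatabasePassword", "UsePreviousValue": True},
        ],
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
    )

    # After the stack update completes, shut down the primary node so the
    # Auto Scaling group replaces it with the new configuration, then restart
    # the secondary nodes one by one, as described above.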

Get to Market Faster

Quick Starts are a great way to make your services and products easily available to the vast market of AWS users. Trace3 has the experience and expertise to help companies of all sizes and at all stages to accelerate their AWS onboarding process.

Contact us to explore how Trace3 can help your company jumpstart its time to the AWS market.
