AWS ECS Fargate, Cloud & Containers, Part I

 

Let’s talk about Cloud and Containers working together. In this article we’ll see (in practice) how to run, scale, and orchestrate containerized applications and clusters on Amazon AWS.

Among the existing categories of Cloud Computing services, besides the most well-known and traditional ones (IaaS, PaaS, SaaS), like it or not, other categories are often used as well (mainly commercially), like FaaS (Function as a Service) and CaaS (Container as a Service). In the CaaS category, orchestration, compute resources, and container virtualization are provided as Cloud services, so users are able to deploy and manage containerized applications and clusters. This cloud service category is still evolving; the big players (AWS, Azure, Google, IBM) are working on it to come up with the best offering.

Outside the Cloud we have already been dealing with container orchestrators for a while; unless you have been living on the moon for the past four years, you have surely heard (a lot) about Kubernetes (a.k.a. K8s), introduced by Google in mid-2014. Some of us might know and use something like Red Hat OpenShift as the container orchestrator, which is actually a “distribution” of Kubernetes “with steroids”. As Red Hat itself stated on its blog, “Kubernetes is the Kernel, OpenShift is the Distribution“; check an excerpt below:


We package Kubernetes and include additional tooling as features that we find important and our users demand. Much as CoreOS and CentOS contain different sets of tooling, catering to different users, so it is the same with Kubernetes distributions. At Red Hat, we have focused on making available the tools that help make developers and operations teams successful. This is why, for example, we are including Istio as a technology preview in OpenShift now. We feel it is a tool many users may come to depend on, and thus should be included as table stakes in the base distribution


AWS ECS, brief concepts

AWS_Docker_Whale

Nowadays, as we may have noticed, alternatives to orchestrate containerized applications are emerging in the Cloud. At Amazon Web Services, one of the options is ECS (Elastic Container Service). This service helps run applications in containers in a highly available manner across multiple Availability Zones within a Region.

Worth mentioning, AWS EKS is already available, the Amazon Kubernetes-as-a-Service alternative, bringing the power of K8s to AWS lands. For those who already trust K8s as their orchestrator, EKS may be an alternative to ECS; both options have pros and cons. Actually, AWS EKS might be a topic for another complete article; this time we’ll keep the focus on AWS ECS.

Before starting the hands-on, here is a brief explanation of the main AWS ECS components we will have to deal with, and also the launch types we can opt for. Feel free to go ahead and skip these definitions; later on, when working on the example, you can come back and take a look to learn a little about each component being used.

AWS ECS Components

  • ECS Cluster: this is a logical group of services; it can contain services of both launch types, EC2 and Fargate (we will see what a launch type is in a minute). We could use clusters to separate environments, for example development, staging, and production. Clusters are Region-specific.
  • ECS Task Definition: here we specify the Docker image, the required resources, and the configuration related to launching the Task. Things like the command the container should run when it is started, data volumes, ports that should be opened up for your application, the Docker networking mode, and the launch type to use are all configured during the Task Definition creation.
  • ECS Service: we use services to run and maintain the desired number of Tasks of a specified Task Definition. It is the Service that works behind a Load Balancer (AWS ELB); this way the traffic is distributed across all the running Tasks.
  • ECS Task: that’s an instantiation of a Task Definition within a cluster. According to the configured desired count, that number of running instances will be created.

As we can see, the terms used by AWS ECS have certain correlations and similarities with the terms most used in the “world of containers”. Let’s see them:

  • Task Definition: like a Dockerfile, it contains the blueprint of our container image.
  • Task: like a running instance of a container image, similar to a Pod.
  • Service & Cluster: for these two terms, nothing new was invented; they basically represent the same thing inside AWS and outside of it.

Now, to finish with the concepts, let’s check something more specific to the AWS ECS environment, the Launch Types.

AWS ECS Launch Types

There are two launch types offered by AWS ECS; let’s see the difference:

  • EC2: in this launch type we simply have AWS EC2 instances running the ECS Container Agent and registered with an ECS cluster. In the AWS Marketplace there are some AMIs ready to use with ECS; we will choose one of them in the exercise for this launch type.
  • Fargate: here some “magic” happens: we deploy containers without provisioning EC2 instances. We don’t need to provide servers or manage the underlying infrastructure; Fargate takes care of all this, reducing management overhead, and we only focus on running containers.

Below we have an architecture diagram showing Tasks of the same ECS Service deployed in two distinct Availability Zones, with the EC2 launch type on the left and the Fargate launch type on the right. We can notice the main difference: in the EC2 launch type we need EC2 instances with the ECS Agent installed (which, by the way, we have to manage – provisioning, autoscaling, etc.), while with the Fargate launch type each Task is automatically attached to an Elastic Network Interface (ENI, providing a private IP for the running Task), with all those details managed by the ECS Fargate service itself.

Architecture_ECS_EC2_Fargate

OK, that’s enough; let’s cut the talk and get our hands dirty with some lab practice.

AWS ECS, hands-on

aws_ecs_handonds

In our exercise, in order to deploy something containerized, we will use a dumb and very simple “Greeting Hello REST” Docker image, available at Docker Hub. It is a Quarkus(1) Docker application; we will deploy it with AWS ECS, using both Launch Types, EC2 and Fargate.


(1) If you haven’t played yet with the Quarkus framework to build microservices, you certainly should take a look; it is worth knowing, at least. Quarkus uses GraalVM/SubstrateVM ahead-of-time compilation technology to generate native images with executable binaries. The resulting program has a faster startup time and a lower runtime memory overhead.
Actually, I’ve built this same application (exactly the same code!!!) into two different Docker images, one using Quarkus and the other using the ordinary Spring Boot framework. The Spring Boot container starts in an average time of 7 seconds, while the Quarkus container starts in 0.019 seconds (milliseconds)! This is a biiiiig difference when it comes to scaling out your containers or starting them from a cold start (as normally happens in a Serverless architecture). The truth is… Spring Boot has some levels of abstraction that are not necessary anymore in the world of containers…   Why do we need a run-anywhere application (HotSpot VM, bytecode, etc.) if we will always run it inside the same (Linux) container?!?  We deliver and deploy containers in this case, not EAR/WAR files anymore. That does not make too much sense, does it?
Of course, there are always pros and cons. Well, that’s another topic, maybe worth an article of its own. I will continue to play with and compare Quarkus vs. Spring Boot, and then try to write down some conclusions in the future.

We will need, of course, an AWS account, the jq tool (a command-line JSON processor), and the AWS CLI locally installed and configured. Bear in mind that the commands were tested on macOS (Mojave) and Linux (Ubuntu), not on Windows. (Some commands will need syntax modifications to work on Windows, even if you use something like Git Bash.)

AWS ECS Fargate Lab

Now that we have everything set, we can start. The first thing will be to create the Cluster.

Create the ECS Cluster

$ aws ecs create-cluster --cluster-name "quarkus"

We can check the ECS Cluster we just created using the AWS Console or the AWS CLI itself (aws ecs describe-clusters --clusters quarkus).

aws_cluster_created.png
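If you prefer to stay on the command line, a quick jq filter like this sketch should show the cluster as ACTIVE:

### Sketch: cluster name and status straight from the CLI
$ aws ecs describe-clusters --clusters quarkus | jq -r '.clusters[] | .clusterName + " -> " + .status'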

Register the ECS Task Definition

We defined all the attributes of our Task Definition in a JSON file (task-definition.json). We pass it as an input parameter to the command below:

$ aws ecs register-task-definition --cli-input-json file://task-definition.json

In the file, we find the defined parameters: Docker image, memory, CPU, container port, host port, protocol, and the command to run at container startup:

### File task-definition.json
{
  "family": "quarkus",
  "networkMode": "awsvpc",
  "requiresCompatibilities": [
    "FARGATE",
    "EC2"
  ],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "quarkus-app",
      "image": "ualter/quarkus-app:latest",
      "memoryReservation": 128,
      "cpu": 256,
      "memory": 512,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 8080,
          "protocol": "tcp"
        }
      ],
      "command": [
        "./application",
        "-Dquarkus.http.host=0.0.0.0"
      ],
      "essential": true
    }
  ]
}

aws_task_definition_registered

The registry (repository) where AWS will look for the Docker image (ualter/quarkus-app:latest) is, in this case, the public service provided by Docker Hub. It is possible to use other registries, including AWS ECR (Amazon Elastic Container Registry, a Docker container registry). Basically, we would have to change some parameters (image path, authentication, etc.) to properly specify the use of another Docker registry.
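For example, pointing the Task Definition at a hypothetical image hosted in ECR would mainly mean changing the image path (the account ID below is made up), and typically also granting the task execution role permission to pull from ECR:

### Hypothetical ECR image path (fictitious account ID), replacing the Docker Hub one in task-definition.json
"image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/quarkus-app:latest"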

Create the AWS ELB – An Application Load Balancer

Before creating the Service, let’s first take care of the Load Balancer. We will create an Application Load Balancer (ALB) so that we have a DNS Name available (representing the Service URL) and, naturally, the distribution of network traffic among all the running Tasks. Let’s go through some steps to set up our ELB.

Select a VPC

List the available VPCs and choose one of them (when an AWS account is created, a default VPC is created in every Region, so it’s not necessary to create a new one if you don’t want to). Choose one of those listed and save its VPC ID to an environment variable named VPC_ID with the following two commands:

$ aws ec2 describe-vpcs | jq '.Vpcs[] | (" --------> VPC Id....: " + .VpcId,.Tags,.CidrBlock)'
" --------> VPC Id....: vpc-0f8ca50ca7b51a5ac"
[
 {
  "Key": "Name",
  "Value": "Tasaciones"
 }
]
"10.0.0.0/16"
" --------> VPC Id....: vpc-27aeb7921"
[
 {
  "Key": "Name",
  "Value": ""
 }
]
"172.31.0.0/16"
### Choose one and save its id
$ VPC_ID=vpc-27aeb7921
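### Alternatively (just a sketch), the default VPC could be grabbed automatically
$ VPC_ID=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true | jq -r '.Vpcs[0].VpcId')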

Select Subnets of the VPC

With the command below we retrieve, and save to an environment variable, all the subnets available for the chosen VPC. This way we can guarantee that the requests will be spread across the Availability Zones (considering that at least two of them were created in different Availability Zones).

$ SUBNETS_IDS=$(aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID | jq -r '.Subnets[].SubnetId' | tr '\n' ' ')
### Check out
$ echo $SUBNETS_IDS
subnet-d47c3f94 subnet-0da467624 subnet-233e7e4e subnet-9646b000

Create a Security Group

Let’s define our Security Group (a resource that controls traffic based on port, protocol, and IP, like a virtual firewall). We add our rules, allowing incoming communication through ports 80 and 8080, TCP/HTTP protocol.

### Create the Security Group and save its ID
$ SG_ID=$(aws ec2 create-security-group  --description "quarkus-DMZ" --group-name "quarkus-DMZ" --vpc-id $VPC_ID | jq -r .GroupId)
### Create the Incoming rule for HTTP, ports 80 and 8080
$ aws ec2 authorize-security-group-ingress --group-id $SG_ID --ip-permissions IpProtocol=tcp,FromPort=8080,ToPort=8080,IpRanges=[{CidrIp=0.0.0.0/0}] IpProtocol=tcp,FromPort=80,ToPort=80,IpRanges=[{CidrIp=0.0.0.0/0}] IpProtocol=tcp,FromPort=8080,ToPort=8080,Ipv6Ranges=[{CidrIpv6=::/0}] IpProtocol=tcp,FromPort=80,ToPort=80,Ipv6Ranges=[{CidrIpv6=::/0}]
### Just a tag, to help identification in the AWS Console
$ aws ec2 create-tags --resources $SG_ID --tags 'Key=Name,Value=Quarkus-EC2-Instance'

aws_securitygroup_created

Create the Application Load Balancer (ALB)
$ aws elbv2 create-load-balancer --name elb-quarkus --subnets $SUBNETS_IDS --security-groups $SG_ID

aws_elb_created

Create the Target Group

This will be the group of resources to which our ALB will distribute the network traffic. We define Port: 8080, Protocol: HTTP. The health checks will go through the same port and protocol, with the root path /. If necessary, this is configurable: the health check of the resources that are part of the group can use a different path, protocol, or port (see the sketch after the screenshot below).

$ aws elbv2 create-target-group --name target-groups-quarkus --protocol HTTP --port 8080 --vpc-id $VPC_ID --target-type ip

aws_targetgroup_created
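If the application exposed a dedicated health endpoint, the health check could later be switched to it. A sketch (the /health path is just an illustrative example; this exercise keeps the root path):

### Sketch: look up the Target Group ARN and point the health check at a dedicated path
$ TG_ARN=$(aws elbv2 describe-target-groups --names target-groups-quarkus | jq -r '.TargetGroups[0].TargetGroupArn')
$ aws elbv2 modify-target-group --target-group-arn $TG_ARN --health-check-path /health --health-check-port 8080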

At this moment, if we check the Targets tab of our Target Group, we will see nothing. There isn’t any resource to receive traffic yet. That’s fine, for now.

aws_targetgroup_targets_empty

Let’s move on; we will come back to this same screenshot later.

Create the ALB Listener

Now that we have created the ALB and the Target Group, the next step is to put them to work together. We do this by adding a Listener to our ALB that forwards to the Target Group. The Load Balancer uses the Listener rules to route the requests to the targets (inside the Target Group).

### Save the ARN of our Load Balancer to a variable
$ ELB=$(aws elbv2 describe-load-balancers | jq -r '.LoadBalancers[] | select( .LoadBalancerName | contains("quarkus")) | .LoadBalancerArn')
### Save the ARN of our Target Group to a variable
$ TARGET_GROUP=$(aws elbv2 describe-target-groups | jq -r '.TargetGroups[] | select( .TargetGroupName | contains("quarkus")) | .TargetGroupArn')
### Create the Listener (Add TG to ALB)
$ aws elbv2 create-listener --load-balancer-arn $ELB --protocol HTTP --port 8080 --default-actions Type=forward,TargetGroupArn=$TARGET_GROUP

aws_elb_listener_created
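To double-check from the CLI that the Listener is in place, something like this sketch should list it:

### Sketch: list the listeners attached to our ALB
$ aws elbv2 describe-listeners --load-balancer-arn $ELB | jq '.Listeners[] | {Port: .Port, Protocol: .Protocol, DefaultActions: .DefaultActions}'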

Ok, we’re done here, now let’s get back to AWS ECS and create the Service.

AWS ECS Service

We need to specify the VPC Subnets where the Tasks of our Service can be hosted. Let’s get their IDs in the required format and save to a variable.

### Get and Save the Subnets Ids for the Service creation
$ SUBNETS_SERVICE=$(aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID | jq '.Subnets[].SubnetId' | tr '\n' ',' | sed 's/.$//')
### Check them
$ echo $SUBNETS_SERVICE
"subnet-df7c3f94","subnet-0c467624","subnet-144e7e4e","subnet-7996b000"

We will use a JSON file to specify all the input parameters to create our Service. Below we can check it and see some “variables” (placeholders) in its content ($TARGET_GROUP, $SUBNETS_SERVICE, $SG_ID). This file is a template; we need to replace the variables and create a valid JSON file before executing the command.

Let’s highlight some of the parameters we chose for our Service:

  • Desired Count: the number of running task instances this Service should have. We start with 1.
  • Launch Type: specifies that we want to use Fargate as the launch type.
  • Target Group ARN: in the Load Balancer section we specify the Target Group we have created (this is “the link” with our ALB), along with the container name and port.
{
    "cluster": "quarkus",
    "serviceName": "service-quarkus",
    "taskDefinition": "quarkus",
    "loadBalancers": [
        {
            "targetGroupArn": "$TARGET_GROUP",
            "containerName": "quarkus-app",
            "containerPort": 8080
        }
    ],
    "networkConfiguration": {
        "awsvpcConfiguration": {
          "subnets": [$SUBNETS_SERVICE],
          "securityGroups": ["$SG_ID"],
          "assignPublicIp": "ENABLED"
        }
    },
    "desiredCount": 1,
    "launchType": "FARGATE"
}

Replacing the variables:

$ cat service-definition-fargate-template.json | sed 's~$TARGET_GROUP~'"$TARGET_GROUP"'~' | sed 's~$SUBNETS_SERVICE~'"$SUBNETS_SERVICE"'~' | sed 's~$SG_ID~'"$SG_ID"'~' > service-definition-fargate.json
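As a side note, if the envsubst utility (from GNU gettext) happens to be installed, the same substitution could be sketched like this (the variables need to be exported first):

### Alternative sketch using envsubst, assuming GNU gettext is installed
$ export TARGET_GROUP SUBNETS_SERVICE SG_ID
$ envsubst '$TARGET_GROUP $SUBNETS_SERVICE $SG_ID' < service-definition-fargate-template.json > service-definition-fargate.json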

And then, create the ECS Service:

$ aws ecs create-service --cli-input-json file://service-definition-fargate.json

Right after we run this command (and the ECS Service is created correctly), we can take another look at our ECS Cluster dashboard and check that we now have one Service.

aws_cluster_with_services

Notice that we already have one running task, as we specified the Desired Count parameter as 1 when creating the ECS Service. Below we can see a screenshot of the list of ECS Services in our Cluster, with some of their attributes (Running and Desired Tasks, Status, etc.):

aws_service_created

Notice that there is a tab called ECS Instances; in this case we will find nothing there, it will be empty (no EC2 instances), even though our Service is healthy, active, and running. This is because the Launch Type of our ECS Service was configured as Fargate, which does not need provisioned EC2 instances to work.

Below, the details of our ECS Service:

aws_service_created_details

If we take a look at the Tasks tab, we’ll find more information about the running tasks, and some basic metrics like CPU and Memory Utilization are available at the Metrics tab. We are also able to configure Auto Scaling (controlling the minimum, desired, and maximum number of running tasks) according to metric observations/alarms (CPU, Memory, Request Count per Target).
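For reference, that same auto scaling can also be wired up from the CLI via Application Auto Scaling. A sketch (the capacity limits and the 60% CPU target below are just illustrative values):

### Sketch: register the service as a scalable target (1 to 4 tasks)
$ aws application-autoscaling register-scalable-target --service-namespace ecs \
    --resource-id service/quarkus/service-quarkus --scalable-dimension ecs:service:DesiredCount \
    --min-capacity 1 --max-capacity 4
### Sketch: target-tracking policy keeping the average CPU around 60%
$ aws application-autoscaling put-scaling-policy --service-namespace ecs \
    --resource-id service/quarkus/service-quarkus --scalable-dimension ecs:service:DesiredCount \
    --policy-name quarkus-cpu60 --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{"TargetValue": 60.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"}}'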

Now let’s take a look again at our ALB. Do you remember that the Target Groups (associated with the ALB) didn’t have any targets registered? Now, let’s take another screenshot of the targets:

aws_targetgroup_targets_withTargets

Now we have targets. The moment we created a Fargate ECS Service (with a Desired Count of 1) associated with this Target Group, a target appeared. The ECS Fargate service took care of this for us, creating an Elastic Network Interface and associating it with the Task.

See what is happening here? Using Fargate you don’t have to deal with server instances (EC2), provisioning, scaling them out and in, etc.; this is already managed. You take care of the Service and its running Tasks.

Testing AWS ECS Service

We are going to use the ALB DNS Name to reach the deployed Service, so let’s save it in an environment variable. (Just for info: if this Service had to be made public to the world with a controlled and registered DNS name, we could use AWS Route 53 to create an ALIAS record for our ALB DNS Name.)

### Save the Load Balancer URL to a environment variable (DNS Name)
$ ELB_URL=$(aws elbv2 describe-load-balancers | jq -r '.LoadBalancers[] | select( .LoadBalancerName | contains("quarkus")) | .DNSName')
### and, show me...
$ echo $ELB_URL
elb-quarkus-24625403.us-west-2.elb.amazonaws.com

Now, the first call to our ECS Service:

$ curl -vw "\n\n" http://$ELB_URL:8080/hello/greeting/Visitor
> GET /hello/greeting/Visitor HTTP/1.1
> Host: elb-quarkus-159762543267.us-west-2.elb.amazonaws.com:8080
> User-Agent: curl/7.64.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Wed, 07 Aug 2019 13:28:42 GMT
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 56
< Connection: keep-alive
< 
Hello from Quarkus, Visitor

* Connection #0 to host elb-quarkus-159762543267.us-west-2.elb.amazonaws.com left intact
*Server_IP --> 172.31.44.13

See the Server_IP value in the response? This is the private IP of the ENI we saw created before, attached to our single running task.

Scaling Out

Let’s change the configuration of our ECS Service, asking for a larger number of running tasks by modifying the desired count parameter from 1 to 3.

$ aws ecs update-service --service service-quarkus --desired-count 3 --cluster quarkus
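If we want the shell to block until the new tasks are actually running, the ECS waiters can help (a small sketch):

### Optional sketch: wait until the running count matches the new desired count
$ aws ecs wait services-stable --cluster quarkus --services service-quarkus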

And then, after a few seconds, we will find our ECS Service this way:

aws_service_scaledOut_details

And now, checking the ALB Target Group after scaling out:

aws_targetgroup_targets_afterScalingOut

We still have the previous IP 172.31.44.13 (of the first running task), and two new ones were provisioned. Each one of them is attached to a running task of our ECS Service. Notice that they are all located in different Availability Zones (us-west-2a, us-west-2c, us-west-2d), so each task is running in a different data center of the same AWS Region, giving our Service high availability.

Firing requests to our Service with the curl command, we will now be answered randomly by those different IPs, that is, by the different running tasks of our Service.

ualter@osboxes:~/$ curl -w "\n\n" http://$ELB_URL:8080/hello/greeting/Visitor
Hello from Quarkus, Visitor

*Server_IP --> 172.31.9.130

ualter@osboxes:~/$ curl -w "\n\n" http://$ELB_URL:8080/hello/greeting/Visitor
Hello from Quarkus, Visitor

*Server_IP --> 172.31.48.51

ualter@osboxes:~/$ curl -w "\n\n" http://$ELB_URL:8080/hello/greeting/Visitor
Hello from Quarkus, Visitor

*Server_IP --> 172.31.9.130

ualter@osboxes:~/$ curl -w "\n\n" http://$ELB_URL:8080/hello/greeting/Visitor
Hello from Quarkus, Visitor

*Server_IP --> 172.31.44.13
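To see the distribution more clearly, a small loop like this sketch can fire a batch of requests and count how many times each task IP answered:

### Sketch: 20 requests, grouped by the answering task IP
$ for i in {1..20}; do curl -s "http://$ELB_URL:8080/hello/greeting/Visitor"; done | grep "Server_IP" | sort | uniq -c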

Bear in mind that we created an internet-facing AWS ALB, so the DNS Name of our Load Balancer is publicly available. We can access it through a browser as well, using the ALB DNS Name (previously saved in the $ELB_URL environment variable; it will look something like http://elb-quarkus-1096396912.us-west-2.elb.amazonaws.com:8080/hello/greeting/Visitor).

We can use a combination of two simple command-line tools to perform a quick load test: Apache Benchmark (to run the load test) and Gnuplot (to generate graphs from the measured values).

Firing 500 requests with concurrency, 5 requests at a time:

### Perform the Test with Apache Benchmark (500 requests, by 5)
$ ab -n 500 -c 5 -g data.tsv "http://$ELB_URL:8080/hello/greeting/Visitor/"
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking elb-quarkus-14543565903.us-west-2.elb.amazonaws.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests

Server Software: 
Server Hostname: elb-quarkus-14543565903.us-west-2.elb.amazonaws.com
Server Port: 8080

Document Path: /hello/greeting/Visitor/
Document Length: 56 bytes

Concurrency Level: 5
Time taken for tests: 38.213 seconds
Complete requests: 500
Failed requests: 0
Total transferred: 95500 bytes
HTML transferred: 28000 bytes
Requests per second: 13.08 [#/sec] (mean)
Time per request: 382.128 [ms] (mean)
Time per request: 76.426 [ms] (mean, across all concurrent requests)
Transfer rate: 2.44 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 178 186 9.3 183 250
Processing: 180 192 9.2 188 253
Waiting: 180 191 9.2 188 253
Total: 361 377 14.9 372 460

Percentage of the requests served within a certain time (ms)
50% 372
66% 377
75% 381
80% 384
90% 392
95% 406
98% 433
99% 446
100% 460 (longest request)

The -g option we used with the Apache Benchmark command above generated a file with all the measured values, ready for the Gnuplot tool. In order to run it, we need to configure some commands and parameters, like the type of output (PNG image), the resolution of the PNG image, and graph properties like titles, labels, etc. We put all of this into a file and pass it as an input parameter to Gnuplot. You can find the file here: apache-benchmark.p
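Just to give an idea of what goes into such a script, here is a rough sketch (the linked apache-benchmark.p is the one actually used; column 5 of ab’s -g output holds the total response time):

### Rough sketch of a Gnuplot script for ab's -g output (not the actual apache-benchmark.p)
set datafile separator "\t"
set terminal png size 1024,768
set output sprintf("benchmark-%d.png", PLOT)
set title TITLE
set grid
set xlabel "Request"
set ylabel "Total time (ms)"
plot "data.tsv" every ::1 using 5 with lines title LINE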

### Run Gnuplot, generating the Graph 
$ gnuplot -e "TITLE='Requests'" -e "LINE='Response Time'" -e "PLOT=4" apache-benchmark.p

### Open the Graph Image at Linux Ubuntu
$ eog benchmark-4.png

#### OR

### Open the Graph Image at Mac OS
$ open benchmark-4.png

And here (with only two command lines executed) we have something visual as the result of our simple load test.

benchmark

Of course, we can also take advantage of other services offered by the Cloud provider. In this case, some simple metrics for our Load Balancer (ALB) are available, automatically generated by AWS CloudWatch (below, the Request Count):

cloudWatch_monitoring_countRequest
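The same metric can also be pulled from the CLI. A rough sketch (using GNU date to build the time window, and the LoadBalancer dimension derived from the ALB ARN saved earlier in $ELB):

### Sketch: request count of the last hour, summed in 5-minute buckets (GNU date syntax)
$ LB_DIM=${ELB#*:loadbalancer/}
$ aws cloudwatch get-metric-statistics --namespace AWS/ApplicationELB --metric-name RequestCount \
    --dimensions Name=LoadBalancer,Value=$LB_DIM --statistics Sum --period 300 \
    --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ) --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ)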

Clean Up

Especially if you are not eligible for the Free Tier (the ability to explore the AWS account for 12 months from its creation, free of charge up to specified limits for each service), it is important to remove the resources we have created. Let’s clean them up, every one of the AWS resources; it could cost some money if we left them there.

### Clean Service
$ aws ecs update-service --service service-quarkus --desired-count 0 --cluster quarkus
### Wait a little until every running task is deleted, and then delete the service:
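### (Optionally, a sketch: this waiter should block until the running tasks drain to zero)
$ aws ecs wait services-stable --cluster "quarkus" --services "service-quarkus"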
$ aws ecs delete-service --cluster "quarkus" --service "service-quarkus"

### Clean TaskDefinition
### (In case there are more revisions/versions, deregister each one we have registered)
$ for i in {1..5}; do aws ecs deregister-task-definition --task-definition quarkus:$i ; done

### Clean Cluster
$ aws ecs delete-cluster --cluster quarkus

### Clean ELB
$ ELB=$(aws elbv2 describe-load-balancers | jq -r '.LoadBalancers[] | select( .LoadBalancerName | contains("quarkus")) | .LoadBalancerArn')
$ aws elbv2 delete-load-balancer --load-balancer-arn $ELB

### Clean the Target Group
$ TARGET_GROUP=$(aws elbv2 describe-target-groups | jq -r '.TargetGroups[] | select( .TargetGroupName | contains("quarkus")) | .TargetGroupArn')
$ aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP

### Clean the Security Group
$ aws ec2 delete-security-group --group-id $SG_ID

 

Conclusion

In this article, we have tried one of the AWS service options for working with containerized applications, AWS ECS, specifically using the Fargate launch type. In the myriad of options offered by the market as “the salvation” of your problems and the fulfillment of the requirements for working with Microservices, it could be another option to take into account. One of its clear advantages might be the level of integration with other AWS services, if you are already invested in that ecosystem.

We still have some work to do (we didn’t see AWS ECS working with the EC2 Launch Type), but let’s take a break here and split this into two parts. In Part II we will try out the EC2 Launch Type and see how it differs from Fargate.
