MacPro 1,1 Mod – Part 1

Way back in 2006 I had saved some money and splurged on a MacPro tower and a 30″ Cinema Display. I lovingly upgraded it over the years, including a flashed Radeon HD 4890, until eventually Apple planned the obsolescence of my beloved hardware. The MacPro 1,1 has a 32-bit EFI, so once El Capitan came along, OS upgrades were no longer something that came for free. Fortunately, I am not afraid of getting my hands dirty, so I was able to muck around with the Clover EFI boot loader and extend the life of my cherished cheesegrater for a few more years.

Eventually, I inherited a Lenovo ThinkServer TS140 which I was able to tinker into a functioning Hackintosh. I found the performance of the new hardware to be so much better, I stopped using the old standby entirely. My MacPro was sidelined, unplugged, tucked away gathering dust in a closet under the stairs.

I have held on to that MacPro for several years now, every once in a while rediscovering it while shuffling storage between seasons. Every time I was reminded of its existence, an ember of desire was rekindled to resurrect that case to new glory with modern hardware and an updated OS. Finally, a few weeks ago I decided to use my tax return to make fantasy a reality. Several blood- and sweat-soaked weekends later, I am writing this blog post using the results of my labor:

Before knowing how successful my build would be, I did not want to spend too much on the most premium hardware available, so I was fairly conservative. After some initial research I thought I would be able to fit a regular-size ATX board (like this fellow), so I ordered a Gigabyte Z390 Designare. I quickly realized that if I wanted to keep the hot-swap SATA sleds and not lose all my hair performing the build, a regular-sized ATX board was simply too big. So I ended up with the more modest version in mATX format, the Gigabyte Z390 M Gaming.

Some of the other core components I already had in the TS140: a Corsair RM850x PSU, an EVGA Geforce GTX 1060, and multiple hard drives with High Sierra already installed. I supplemented my old hardware with some new goodies: an Intel Core i7-8700K processor, 32GB of Ballistix Sport LT DDR4 RAM, an Intel 660p M.2 2280 2TB NVMe, and a Corsair H80i Liquid CPU Cooler just in case I want to over-clock.

Once I had all the bits and bobs, it was time to start gutting. At first it was really hard to rip out the carefully constructed innards of my precious MacPro, but once I got going it was easy. Once the main components were out, I had a fairly clean slate to work with:

As I said before, I really wanted to preserve the hot-swap hard drive sleds, and after some research I decided to order replacements rather than the conversion harness some folks have used. Corsair has some hot-swap mounts (found at the end of neilhart’s build thread over at tonymacx86) which are almost identical to the originals:

The connectors sit a little low for my build, so I might have to adjust the sleds, but for now they are working great.

Once I was satisfied everything was going to fit, it was time to unplug the TS140 and strip out the components I would need for my new MacPro 1,1 build. At this point I was committed, so I pulled the rest of the wires and set to work with the Dremel.

The first physical modification to the case was to rip out the stand-offs under the footprint of the new mATX motherboard, cut them down to 1/2″, and epoxy them back onto the case in the correct positions. I took a lot of time making sure the orientation was correct and used two old graphics cards to make sure it was perfect:

Standoffs removed
Getting the placement right

Once the motherboard had a place to live, it was time to get the PSU into place and run some wires. I was not quite prepared to completely tear out the old PSU and replace it with the innards from my RM850x (like this fellow), so my only alternative was to tear out the fan and steel separator of the top shelf of my MacPro. What an f’ing pain in the ass that was. I lost a lot of skin on my knuckles, and after going through about four metal-cutting Dremel disks I was finally able to pull it out:

Game of Thrones is on now, so I will have to continue this later with explanations of the images below…

Success!

AWS Lambda function to set Route53 DNS entries for Autoscaling lifecycle events

Some of our ECS Cluster machines need to have both public and private DNS entries. Rather than update Route53 manually (super annoying), we modified the Lambda function we found here: https://objectpartners.com/2015/07/07/aws-tricks-updating-route53-dns-for-autoscalinggroup-using-lambda/ so that it works for both internal and external hosted zones. All you need to do to get this working is add an SNS Topic to the lifecycle events for an Autoscaling Group, then create the following Lambda and subscribe it to that SNS Topic.
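
To wire this up with the AWS CLI, a rough sketch looks like the following. The topic name, Autoscaling Group name, account ID, and region are placeholders for your own values.

# create the SNS topic (or reuse an existing one)
aws sns create-topic --name asg-dns-events

# have the Autoscaling Group publish launch/terminate events to the topic
aws autoscaling put-notification-configuration \
  --auto-scaling-group-name my-asg \
  --topic-arn arn:aws:sns:us-east-1:111111111111:asg-dns-events \
  --notification-types autoscaling:EC2_INSTANCE_LAUNCH autoscaling:EC2_INSTANCE_TERMINATE

# allow SNS to invoke the Lambda, then subscribe the Lambda to the topic
aws lambda add-permission \
  --function-name lambda-route53-updater \
  --statement-id sns-invoke \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn arn:aws:sns:us-east-1:111111111111:asg-dns-events

aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:111111111111:asg-dns-events \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:111111111111:function:lambda-route53-updater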

There are two files for the Lambda: package.json

{
  "name": "lambda-route53-updater",
  "dependencies": { 
    "async": "latest",
    "aws-sdk": "latest"
  }
}

and the node function, index.js

/*
 Update Route53 Entries on Autoscale events with AWS Lambda.
 Code borrowed from https://objectpartners.com/2015/07/07/aws-tricks-updating-route53-dns-for-autoscalinggroup-using-lambda/
 */
 
 
var AWS = require('aws-sdk');
var async = require('async');
 
 
exports.handler = function (event, context) {
    var asgMsg = JSON.parse(event.Records[0].Sns.Message);
    var asgName = asgMsg.AutoScalingGroupName;
    var instanceId = asgMsg.EC2InstanceId;
    var asgEvent = asgMsg.Event;
 
    //console.log(asgEvent);
    if (asgEvent === "autoscaling:EC2_INSTANCE_LAUNCH" || asgEvent === "autoscaling:EC2_INSTANCE_TERMINATE") {
        console.log("Handling Launch/Terminate Event for " + asgName);
        var autoscaling = new AWS.AutoScaling({region: 'us-east-1'});
        var ec2 = new AWS.EC2({region: 'us-east-1'});
        var route53 = new AWS.Route53();
 
        async.waterfall([
            function describeTags(next) {
                console.log("Describing ASG Tags");
                autoscaling.describeTags({
                    Filters: [
                        {
                            Name: "auto-scaling-group",
                            Values: [
                                asgName
                            ]
                        },
                        {
                            Name: "key",
                            Values: ['DomainMeta']
                        }
                    ],
                    MaxRecords: 1
                }, next);
            },
            function processTags(response, next) {
                console.log("Processing ASG Tags");
                if (response.Tags.length == 0) {
                    // bail out of the waterfall here, otherwise next() would be called twice
                    return next("ASG: " + asgName + " does not define Route53 DomainMeta tag.");
                }
                var tokens = response.Tags[0].Value.split(':');
                next(null, tokens[0], tokens[1], tokens[2]);
            },
            function handleEvent(hostedZoneId, zoneName, tagTokenName, next) {
                console.log("Processing Route53 records for zone " + hostedZoneId + " (" + zoneName + ")");
                var action = null;
                var fqdn = (tagTokenName || instanceId) + "." + zoneName + ".";
 
                if (asgEvent == "autoscaling:EC2_INSTANCE_LAUNCH") {
                    action = "UPSERT";
                    ec2.describeInstances({
                        DryRun: false,
                        InstanceIds: [instanceId]
                    }, function (err, data) {
                        next(err, action, hostedZoneId, fqdn, data);
                    });
                }
 
                if (asgEvent == "autoscaling:EC2_INSTANCE_TERMINATE") {
                    action = "DELETE";
                    route53.listResourceRecordSets(
                        {
                            HostedZoneId: hostedZoneId,
                            StartRecordName: fqdn
                        },
                        function (err, data) {
                            next(err, action, hostedZoneId, fqdn, data)
                        })
                }
            },
            function updateRecord(action, hostedZoneId, fqdn, awsResponse, next) {
                console.log("[" + action + "] record set for [" + fqdn + "]");
                var record,
                    fqdnParts = fqdn.split('.'),
                    lastFqdnPart = fqdnParts[fqdnParts.length - 2];
 
                if (action == "UPSERT") {
                    var recordValue = (lastFqdnPart == 'internal'
                            ? awsResponse.Reservations[0].Instances[0].NetworkInterfaces[0].PrivateIpAddress 
                            : awsResponse.Reservations[0].Instances[0].NetworkInterfaces[0].Association.PublicIp
                        ),
                        resourceRecords = [
                            {
                                Value: recordValue
                            }
                        ];
                    record = {
                        Name: fqdn,
                        Type: "A",
                        TTL: 10,
                        ResourceRecords: resourceRecords
                    }
                }
 
                // lambda's do not always execute in chronological order: (╯°□°)╯︵ ┻━┻
                // do not delete internal dns, only update
                if (action == "DELETE" && lastFqdnPart != 'internal') {
                    record = awsResponse.ResourceRecordSets.map(
                        function (recordSet) {
                            if (recordSet && recordSet.Name == fqdn) {
                                return recordSet
                            }
                        }
                    )[0];
                }
 
                if (typeof record === 'undefined') {
                    // bail out before attempting a Route53 change with no record
                    return next('Unable to construct record, perhaps it was already deleted?');
                }
 
                var params = {
                    ChangeBatch: {
                        Changes: [
                            {
                                Action: action,
                                ResourceRecordSet: record
                            }
                        ]
                    },
                    HostedZoneId: hostedZoneId
                };
 
                console.log("Executing Route53 update: [ " + action + " ] " + fqdn);
                route53.changeResourceRecordSets(params, next)
 
            },
            function evaluateResponse(data, next) {
                if (data.ChangeInfo.Status == 'PENDING') {
                    console.log('Successfully updated DNS record id: ' + data.ChangeInfo.Id)
                    next()
                }
                else { next(data) }
            }
 
        ], function (err) {
            if (err) {
                console.error('Failed to process DNS updates for ASG event: ', err);
            } else {
                console.log("Successfully processed DNS updates for ASG event.");
            }
            context.done(err);
        })
    } else {
        console.log("Unsupported ASG event: " + asgName, asgEvent);
        context.done("Unsupported ASG event: " + asgName, asgEvent);
    }
};
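
If you have not deployed the function before, a minimal packaging and creation sketch is below. The role ARN is a placeholder, and the runtime should be whichever Node.js runtime your account currently supports; the execution role also needs permission to call autoscaling:DescribeTags, ec2:DescribeInstances, and the Route53 record APIs used above.

# install dependencies next to index.js and package everything up
npm install
zip -r lambda-route53-updater.zip index.js package.json node_modules

# create the function (placeholder role ARN; pick a supported Node.js runtime)
aws lambda create-function \
  --function-name lambda-route53-updater \
  --runtime nodejs4.3 \
  --handler index.handler \
  --role arn:aws:iam::111111111111:role/lambda-route53-updater-role \
  --zip-file fileb://lambda-route53-updater.zip \
  --timeout 30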

Caveats… This Lambda requires that the Auto Scaling Group have a tag named DomainMeta with a value containing colon-delimited values for Hosted Zone ID, zone name, and optionally a subdomain name. Here is an example value: ABC1234DEFG:staging.internal:example, which would result in the private IP of the EC2 instance being set as example.staging.internal in the Hosted Zone with ID ABC1234DEFG.
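
If your Autoscaling Group is not created through something like the CloudFormation template below, you can set the tag by hand with something like this (the ASG name is a placeholder, and the value is just the example from above):

aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=DomainMeta,Value=ABC1234DEFG:staging.internal:example,PropagateAtLaunch=true"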

Also, all of our internal Hosted Zones end in .internal, so the logic that checks whether the last part of the domain name is “internal” will need to match your naming scheme.

AWS Cloudformation ECS Stack json config with Launch Config, Autoscaling Group and Load Balancer

A while back my company switched to using Docker and ECS for our application. I wanted a structured way to generate AWS resources, and I found that AWS CloudFormation is a great way to do this. It took a lot of trial and error to figure everything out, so I thought posting a rough tutorial might help others trying to do the same thing. Here we go…

There are a few things which need to be generated manually before you can bring up the stack:
1. Create a VPC with public subnets and an EC2 key pair (or use existing ones). In this example, the key pair name is VPC-Example-Key-Pair
2. Create an ECS Cluster. In this example, the ECS Cluster name is example-ecs-cluster
3. Create an s3 bucket and upload the initialization script specified below. In this example, the file is uploaded to this S3 path: s3://example-bucket/ecs/userdata/example-ecs-init-script.sh
4. Create an IAM Role and give it the “AmazonEC2ContainerServiceforEC2Role” policy, S3 access to the bucket where the EC2 initialization script lives, permission to register with an Elastic Load Balancer, and a few other things. I created a custom IAM policy for our ECS instances, see the policy below. Plus you may want to add policies for whatever other permissions are necessary for your application. In this example, the IAM Role name is example-ecs-role
5. Create EC2 Security Groups for the Load Balancer and for the EC2 Instances
6. Optional – Create an SNS Notification Topic for EC2 instance autoscaling life cycle events. In this example, the SNS Topic name is example-ecs-autoscale-topic
7. Optional – Upload a SSL certificate for the load balancer. In this example the SSL Cert ARN is arn:aws:iam::1234567890:server-certificate/2016_wildcard.example.com

I decided to keep these resources manually generated. I wanted them to exist outside the life cycle of the CloudFormation stack because they can be shared by more than one stack, and I felt more comfortable maintaining them by hand.
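
For the pieces that are easy to script, a rough AWS CLI sketch of the one-time setup looks like this (names match the example values above; the VPC, IAM role, security groups, and SSL certificate are simpler to set up in the console):

# 2. the ECS cluster
aws ecs create-cluster --cluster-name example-ecs-cluster

# 3. the S3 bucket and the EC2 initialization script
aws s3 mb s3://example-bucket
aws s3 cp example-ecs-init-script.sh s3://example-bucket/ecs/userdata/example-ecs-init-script.sh

# 6. the SNS topic for autoscaling lifecycle notifications
aws sns create-topic --name example-ecs-autoscale-topic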

EC2 instance initialization script, which tells the ECS Agent which cluster the EC2 instance belongs to and provides authorization for a private Docker repo:

#!/bin/sh
 
# update host to latest packages
yum -y update
 
if [ -z "$1" ] || [ -z "$2" ];
then
    echo "${0} usage: [ECS Cluster Name] [Extended Volume Size in GB]"
    exit 1
fi
 
ecsCluster=$1
extendLvmBy=$2
 
cat > /etc/ecs/ecs.config << END
ECS_CLUSTER=$ecsCluster
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"quay.io": {"auth": "BIG LONG NASTY AUTH STRING","email": ""}}
ECS_LOGLEVEL=warn
END
 
# uncomment this if you want the ECS Agent to clean up after itself once per minute, not recommended
# see http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html for more info
#echo "ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=1m" >> /etc/ecs/ecs.config
 
# do some daily docker cleanup which the ECS Agent does not seem to do
cat > /etc/cron.daily/docker-remove-dangling-images << END
#!/bin/sh
echo "Removing dangling images from docker host"
docker rmi \$(docker images -q -f "dangling=true")
END
 
# extend docker lvm with attached EBS volume
vgextend docker /dev/xvdcy
lvextend -L+${extendLvmBy}G /dev/docker/docker-pool
 
##
# Do other initialization stuff for your EC2 instances below
# such as running a logging container or installing custom packages
##

We use quay.io, but you could set up any private repo. Check out http://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html for more info.
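
The auth value in ECS_ENGINE_AUTH_DATA above is just the dockercfg-style credential for your registry user, i.e. base64 of username:password. A quick way to generate it (the credentials below are obviously placeholders):

# produces the base64 string to paste into the "auth" field of ECS_ENGINE_AUTH_DATA
echo -n 'quay-username:quay-password' | base64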

There is also configuration near the end of the script for extending Docker’s LVM with an additional EBS volume, in this example by 80GB. We found that in our development cluster, where lots of images were being deployed, the default 22GB drive on the ECS Optimized image was not enough. The default is fine for most use cases, but I left this part in the example because it was difficult to figure out. If you don’t need it, you can delete the extra volume from the JSON config and the init script.
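
Assuming the volume group is named docker, as in the script above, you can verify on the instance that the extension actually took effect with something like:

# confirm the volume group and thin pool grew after the init script ran
sudo vgs docker
sudo lvs docker

# docker info should report the larger data space for the devicemapper driver
sudo docker info | grep -i "data space"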

IAM policy for ECS instances:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt12345678900000",
            "Effect": "Allow",
            "Action": [
                "ecs:DeregisterContainerInstance",
                "ecs:DeregisterTaskDefinition",
                "ecs:DescribeClusters",
                "ecs:DescribeContainerInstances",
                "ecs:DescribeServices",
                "ecs:DescribeTaskDefinition",
                "ecs:DescribeTasks",
                "ecs:DiscoverPollEndpoint",
                "ecs:ListClusters",
                "ecs:ListContainerInstances",
                "ecs:ListServices",
                "ecs:ListTaskDefinitionFamilies",
                "ecs:ListTaskDefinitions",
                "ecs:ListTasks",
                "ecs:Poll",
                "ecs:RegisterContainerInstance",
                "ecs:RegisterTaskDefinition",
                "ecs:RunTask",
                "ecs:StartTask",
                "ecs:StopTask",
                "ecs:StartTelemetrySession",
                "ecs:SubmitContainerStateChange",
                "ecs:SubmitTaskStateChange",
                "ecs:UpdateContainerAgent",
                "ec2:Describe*",
                "ec2:AuthorizeSecurityGroupIngress",
                "elasticloadbalancing:Describe*",
                "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                "cloudwatch:ListMetrics",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:Describe*",
                "autoscaling:Describe*",
                "iam:PassRole",
                "iam:ListInstanceProfiles"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt12345678900001",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket/*",
                "arn:aws:s3:::example-bucket"
            ]
        }
    ]
}

CloudFormation JSON config file:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example ECS Cluster - Creates a Load Balancer, AutoScaling Group and LaunchConfiguration against an EXISTING VPC and EXISTING ECS Cluster",
    "Parameters": {
        "EcsClusterName": {
            "Type": "String",
            "Description": "ECS Cluster Name",
            "Default": "example-ecs-cluster"
        },
        "Vpc": {
            "Type": "AWS::EC2::VPC::Id",
            "Description": "VPC for ECS Clusters",
            "Default": "vpc-abc123def"
        },
        "SubnetIds": {
            "Type": "List<AWS::EC2::Subnet::Id>",
            "Description": "Comma separated list of VPC Subnet Ids where ECS instances should run",
            "Default": "subnet-abc123,subnet-efg456,subnet-lmn789"
        },
        "AvailabilityZones": {
            "Type": "List<AWS::EC2::AvailabilityZone::Name>",
            "Description": "AutoScaling Group Availability Zones. MUST MATCH THE SUBNETS AZ's",
            "Default": "us-east-1c,us-east-1d,us-east-1e"
        },
        "EcsAmiId": {
            "Type": "AWS::EC2::Image::Id",
            "Description": "Amazon ECS Optimized AMI for us-east-1 region - see http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html",
            "Default": "ami-6df8fe7a"
        },
        "EcsInstanceType": {
            "Type": "String",
            "Description": "ECS EC2 instance type",
            "Default": "t2.nano",
            "AllowedValues": [
                "t2.nano",
                "t2.micro",
                "t2.small",
                "t2.medium",
                "t2.large",
                "m4.large",
                "m4.xlarge",
                "m4.2xlarge",
                "m4.4xlarge",
                "m4.10xlarge",
                "m3.medium",
                "m3.large",
                "m3.xlarge",
                "m3.2xlarge",
                "c4.large",
                "c4.xlarge",
                "c4.2xlarge",
                "c4.4xlarge",
                "c4.8xlarge",
                "c3.large",
                "c3.xlarge",
                "c3.2xlarge",
                "c3.4xlarge",
                "c3.8xlarge",
                "r3.large",
                "r3.xlarge",
                "r3.2xlarge",
                "r3.4xlarge",
                "r3.8xlarge",
                "i2.xlarge",
                "i2.2xlarge",
                "i2.4xlarge",
                "i2.8xlarge"
            ],
            "ConstraintDescription": "must be a valid EC2 instance type."
        },
        "KeyName": {
            "Type": "AWS::EC2::KeyPair::KeyName",
            "Description": "Name of an existing EC2 KeyPair to enable SSH access to the ECS instances",
            "Default": "VPC-Example-Key-Pair"
        },
        "IamRoleInstanceProfile": {
            "Type": "String",
            "Default": "example-ecs-role",
            "Description": "Name or the Amazon Resource Name (ARN) of the instance profile associated with the IAM role for the instance"
        },
        "AsgMinSize": {
            "Type": "Number",
            "Description": "Minimum Size Capacity of ECS Auto Scaling Group",
            "Default": "1"
        },
        "AsgMaxSize": {
            "Type": "Number",
            "Description": "Maximum Size Capacity of ECS Auto Scaling Group",
            "Default": "3"
        },
        "AsgDesiredCapacity": {
            "Type": "Number",
            "Description": "Initial Desired Size of ECS Auto Scaling Group",
            "Default": "2"
        },
        "AsgNotificationArn": {
            "Type": "String",
            "Description": "ECS Autoscale Notification SNS Topic ARN",
            "Default": "arn:aws:sns:us-east-1:1234567890:example-ecs-autoscale-topic"
        },
        "EcsClusterHostedZoneId": {
            "Type": "String",
            "Description": "Route53 Hosted Zone ID For ECS Cluster",
            "Default": "ABCDEFGHIJKLM"
        },
        "EcsClusterHostedZoneName": {
            "Type": "String",
            "Description": "Route53 Hosted Zone Domain Name For ECS Cluster",
            "Default": "myvpc.internal"
        },
        "EcsClusterHostedZoneInstanceName": {
            "Type": "String",
            "Description": "Route53 Hosted Zone Domain Name For ECS Cluster",
            "Default": "ecs-example"
        },
        "EcsPort": {
            "Type": "String",
            "Description": "Security Group port to open on ECS instances - defaults to port 80",
            "Default": "80"
        },
        "EcsHealthCheckEndpoint": {
            "Type": "String",
            "Description": "HealthCheck endpoint for application running on ECS cluster",
            "Default": "/healthcheck/endpoint/url"
        },
        "EcsSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup::Id",
            "Description": "ECS Instance Security Group",
            "Default": "sg-abc123def"
        },
        "LbSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup::Id",
            "Description": "Load Balancer Security Group",
            "Default": "sg-lmn456xyz"
        },
        "SslCertArn": {
            "Type": "String",
            "Description": "SSL Certificate ARN",
            "Default": "arn:aws:iam::1234567890:server-certificate/2016_wildcard.example.com"
        },
        "EC2InstanceInitScriptS3Path": {
            "Type": "String",
            "Description": "ECS Instance Init Script S3 path",
            "Default": "s3://example-bucket/ecs/userdata/example-ecs-init-script.sh"
        },
        "EcsEbsLvmVolumeSize": {
            "Type": "Number",
            "Description": "Size in GB of attached EBS volume for extending Docker's LVM disk space",
            "Default": "80"
        }
    },
    "Resources": {
        "EcsInstanceLc": {
            "Type": "AWS::AutoScaling::LaunchConfiguration",
            "Properties": {
                "ImageId": {
                    "Ref": "EcsAmiId"
                },
                "InstanceType": {
                    "Ref": "EcsInstanceType"
                },
                "AssociatePublicIpAddress": true,
                "IamInstanceProfile": {
                    "Ref": "IamRoleInstanceProfile"
                },
                "KeyName": {
                    "Ref": "KeyName"
                },
                "SecurityGroups": [
                    {
                        "Ref": "EcsSecurityGroup"
                    }
                ],
                "BlockDeviceMappings": [
                    {
                        "DeviceName": "xvdcy",
                        "Ebs": {
                            "DeleteOnTermination": "true",
                            "VolumeSize": {
                                "Ref": "EcsEbsLvmVolumeSize"
                            },
                            "VolumeType": "gp2"
                        }
                    }
                ],
                "UserData": {
                    "Fn::Base64": {
                        "Fn::Join": [
                            "",
                            [
                                "#!/bin/bash\n",
                                "yum install -y aws-cli\n",
                                "aws s3 cp ",
                                {"Ref": "EC2InstanceInitScriptS3Path"},
                                " /tmp/ecs-init.sh\n",
                                "chmod +x /tmp/ecs-init.sh\n",
                                "/tmp/ecs-init.sh ",
                                {"Ref": "EcsClusterName"},
                                " ",
                                {"Ref": "EcsEbsLvmVolumeSize"},
                                "\n"
                            ]
                        ]
                    }
                }
            }
        },
        "EcsInstanceAsg": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "AvailabilityZones": {
                    "Ref": "AvailabilityZones"
                },
                "VPCZoneIdentifier": {
                    "Ref": "SubnetIds"
                },
                "LaunchConfigurationName": {
                    "Ref": "EcsInstanceLc"
                },
                "MinSize": {
                    "Ref": "AsgMinSize"
                },
                "MaxSize": {
                    "Ref": "AsgMaxSize"
                },
                "DesiredCapacity": {
                    "Ref": "AsgDesiredCapacity"
                },
                "NotificationConfigurations": [
                    {
                        "NotificationTypes": [
                            "autoscaling:EC2_INSTANCE_LAUNCH",
                            "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
                            "autoscaling:EC2_INSTANCE_TERMINATE",
                            "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
                        ],
                        "TopicARN": {
                            "Ref": "AsgNotificationArn"
                        }
                    }
                ],
                "Tags": [
                    {
                        "Key": "Name",
                        "Value": {
                            "Fn::Join": [
                                "",
                                [
                                    {
                                        "Ref": "EcsClusterName"
                                    },
                                    "-auto"
                                ]
                            ]
                        },
                        "PropagateAtLaunch": "true"
                    },
                    {
                        "Key": "DomainMeta",
                        "Value": {
                            "Fn::Join": [
                                ":",
                                [
                                    {
                                        "Ref": "EcsClusterHostedZoneId"
                                    },
                                    {
                                        "Ref": "EcsClusterHostedZoneName"
                                    },
                                    {
                                        "Ref": "EcsClusterHostedZoneInstanceName"
                                    }
                                ]
                            ]
                        },
                        "PropagateAtLaunch": "true"
                    }
                ],
                "LoadBalancerNames": [
                    {
                        "Ref": "EcsLb"
                    }
                ]
            }
        },
        "EcsLb": {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            "Properties": {
                "Subnets": {
                    "Ref": "SubnetIds"
                },
                "SecurityGroups": [
                    {
                        "Ref": "LbSecurityGroup"
                    }
                ],
                "Instances": [],
                "Listeners": [
                    {
                        "LoadBalancerPort": "80",
                        "InstancePort": {
                            "Ref": "EcsPort"
                        },
                        "Protocol": "HTTP"
                    },
                    {
                        "LoadBalancerPort": "443",
                        "InstancePort": {
                            "Ref": "EcsPort"
                        },
                        "Protocol": "HTTPS",
                        "SSLCertificateId": {
                            "Ref": "SslCertArn"
                        }
                    }
                ],
                "HealthCheck": {
                    "Target": {
                        "Fn::Join": [
                            "",
                            [
                                "HTTP:",
                                {
                                    "Ref": "EcsPort"
                                },
                                {
                                    "Ref": "EcsHealthCheckEndpoint"
                                }
                            ]
                        ]
                    },
                    "HealthyThreshold": "2",
                    "UnhealthyThreshold": "2",
                    "Interval": "20",
                    "Timeout": "5"
                },
                "Tags": [
                    {
                        "Key": "Name",
                        "Value": {
                            "Fn::Join": [
                                "",
                                [
                                    {
                                        "Ref": "EcsClusterName"
                                    },
                                    "-lb"
                                ]
                            ]
                        }
                    }
                ]
            }
        }
    },
    "Outputs": {
        "EcsAutoScalingGroupName": {
            "Description": "AutoScaling Group Name which will manage creation of new ECS Instances",
            "Value": {
                "Ref": "EcsInstanceAsg"
            }
        },
        "EcsLaunchConfiguration": {
            "Description": "Launch Configuration the AutoScalingGroup will use when creating new ECS Instances",
            "Value": {
                "Ref": "EcsInstanceLc"
            }
        }
    }
}

Fill in all the parameters and then create the stack with the following AWS CLI command, assuming you have the AWS CLI installed and configured with a user that has CloudFormation permissions =)

aws cloudformation create-stack --stack-name example-ecs-stack \
  --template-body file:///Users/yourname/path/to/cloudformation/ecs_example_stack-cluster.cloudformation.json \
  --tags Key=stack,Value=example-ecs Key=vpc,Value=example-vpc
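
Once the create call returns, you can wait on the stack and inspect its outputs; later template changes go through update-stack with the same file. A quick sketch:

# block until the stack finishes creating, then show the outputs
aws cloudformation wait stack-create-complete --stack-name example-ecs-stack
aws cloudformation describe-stacks --stack-name example-ecs-stack --query 'Stacks[0].Outputs'

# push template changes later with the same file
aws cloudformation update-stack --stack-name example-ecs-stack \
  --template-body file:///Users/yourname/path/to/cloudformation/ecs_example_stack-cluster.cloudformation.json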

You’ll note that there is a DomainMeta tag on the Auto Scaling Group, which is propagated to the EC2 instances. In our system, these tags are used by a Lambda function which automatically sets Route53 DNS entries for ECS cluster instances by subscribing to the SNS Topic for autoscaling life cycle events. Very handy; I’ll put up that Lambda function in another post.

RDS snapshot and restore script

A few months ago we needed to automate the creation and restoration of manual RDS snapshots. This is the script I came up with…
https://gist.github.com/feedthefire/086799433b472b8d3d9e7e0921554eaf

Opcache issues resolved for symlink based atomic deploys and PHP-FPM

I recently ran into problems with php-fpm and opcache when using symbolic links for atomic deploys. The solution was simple: use $realpath_root instead of $document_root in your fastcgi config. Thank you nginx, you make me feel all warm and fuzzy inside.

fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;

Details below —

Folder structure:

├── current -> deploy1
├── deploy1
│   └── www/index.php
└── deploy2
    └── www/index.php

The problem:

Opcache caches compiled scripts by file path, not by filesystem inode. What this means is that when you flip the “current” symlink from “deploy1” to “deploy2”, opcache still serves the code compiled from deploy1/www/index.php, because the path it has saved is current/www/index.php rather than the “real”, resolved file path.

The solution:

Have nginx resolve the symbolic link before the request even gets to php-fpm:

fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;

Full disclosure, I found the above solution here: http://stackoverflow.com/a/23904770

I still recommend that you clear Opcache after doing a deploy and flipping the symlink, but this solution does not require you to do that.
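
For completeness, a hypothetical deploy flow with that recommendation baked in might look like the sketch below. The paths are placeholders and the php-fpm service name varies by platform; reloading php-fpm is just one blunt way to drop the old opcache entries.

# point a temporary symlink at the new release, then atomically swap it into place
ln -sfn /srv/app/deploy2 /srv/app/current.tmp
mv -Tf /srv/app/current.tmp /srv/app/current

# optional: reload php-fpm to clear opcache (service name varies by distro/platform)
sudo service php-fpm reload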

Finding your iPad’s UDID for Test Flight

I don’t know what possessed the folks at Apple to make it so incredibly difficult to find your iPad’s identifier, aka your iPad’s UDID. I’m not sure why it’s not just in the About section of the iPad’s Settings app, but anyway, here is how you find your iPad’s UDID for your app developer’s TestFlight account:

  1. Connect your iPad to your computer and fire up iTunes.
  2. Go to your iPad’s summary page in iTunes.
  3. Under your iPad’s name and capacity, click on the “Serial Number”.
  4. “Serial Number” then changes to your UDID.

So f’ing ridiculous. Thanks, Apple.

Hopefully this saves others time.

Twitter Bootstrap responsive container with padding and a border

I’m working on a project where I wanted to use the Twitter Bootstrap responsive CSS scaffolding, but ran into a bit of a snafu because I needed to add a border and background to the container class.

I decided to extend the @media responsive stylesheet declarations to achieve this. The padding is not quite as much as I would like, but it’s better than having the span divs right up against the edges of the container div, and I’m too lazy to edit all the span widths.

.container,
.navbar-fixed-top .container,
.navbar-fixed-bottom .container {
	width: 940px;
	padding: 10px;
	border:1px solid #cecece;
	border-top:0px none;
	background:#fff;
}
@media (max-width: 767px) {
  body{
    padding-left: 0px;
    padding-right: 0px;
  }
  .container{
    padding:5px 19px;
    width: auto;
  }
}
@media (min-width: 768px) and (max-width: 979px) {
  .container,
  .navbar-fixed-top .container,
  .navbar-fixed-bottom .container {
    width: 724px;
    padding:10px 14px;
  }
}
@media (min-width: 1200px) {
  .container,
  .navbar-fixed-top .container,
  .navbar-fixed-bottom .container {
    width: 1170px;
    padding:10px 14px;
  }
}

PHP sendmail 90 second delay on Dotcloud [solved]

For months we have been having upstream timeout issues with our Dotcloud PHP service running nginx and php-fpm. Amongst other causes, we found that after our instances were up for more than a day, PHP’s mail() function using sendmail was consistently taking exactly 90 seconds to send an email. Unacceptable.

After going back and forth with Dotcloud’s support, we determined that sendmail was spawning 3 processes to send an email, and apparently on their boxes it takes 30 seconds to spawn a new process. Seems like it shouldn’t take that long, but it does. The solution was to simply not use sendmail. Instead, we use the SMTP protocol to talk to Postfix locally on the PHP instance. Originally I wanted to avoid sending email via SMTP because it can be slow when authenticating to a remote server, but because we are using SMTP locally with no authentication, it is very fast.

In short, DO NOT USE sendmail on DotCloud PHP instances or it will cause all kinds of problems.

Upstream timeout issues with nginX + php-fpm + CodeIgniter + gzip + xdebug on DotCloud – [resolved]

We have been using DotCloud as our hosting platform for months now, and overall I have been extremely pleased with their service. There were some bumps in the road early on while they were still in their beta phase, but things have been running very smoothly for a few months now. Everything except an uncomfortable number of seemingly random nginx “upstream timed out (110: Connection timed out) while reading response header from upstream” errors.

If you want to get what you need and not read the rest of this post, I will make it super simple for you:

If you are using php-fpm, gzip, xdebug and CodeIgniter, disable xdebug!!!
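
How you disable it depends on how your PHP is configured; a generic sketch is below. The ini path is just a common location, not necessarily where your platform keeps it, and the service name varies.

# comment out the xdebug extension wherever it is loaded, then restart php-fpm
sudo sed -i 's|^\(zend_extension.*xdebug.so\)|;\1|' /etc/php.d/xdebug.ini
sudo service php-fpm restart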

For anyone who feels like listening to my story, please read on. Not only were these errors unsettling, they were causing DotCloud’s load balancer to bounce POST requests back to our servers multiple times because it was getting error responses and assuming nothing was happening on our end. This in turn was causing user input (comments, image uploads, password requests, etc.) to be saved, or emails to be sent, multiple times. Super embarrassing.

After weeks of research and floundering, a DotCloud tech and I finally discovered the issue.  There is a known bug with xdebug and ob_gzhandler which was causing our php processes to seg fault.  The bug is documented here:

http://grokbase.com/p/php/php-bugs/0365rtcdgx/23985-bgs-ob-gzhandler-make-segmentation-fault

What was happening was this: the request was sent to our server, the php process did everything it was supposed to, and then, when CodeIgniter’s output buffer was being gzipped by ob_gzhandler(), the php process segfaulted, causing nginx to time out waiting for the response from php-fpm. So, while everything was successfully happening in the php script, the output back to the client was failing.

By disabling the xdebug extension in the php configuration, the php processes stopped seg faulting, and everything is happy again!  No more Upstream Timeouts! It took a really long time to track this issue down, so I hope this post helps someone =)

AddThis Pinterest button hack to look better in 32×32 format

Today I had to add a Pinterest Share button to our website and run it through our AddThis account so we can track the analytics. Unfortunately, there are only two options for the Pinterest button via AddThis, neither of which looks good alongside our 32×32 style share buttons. So, I decided to hack it up =)

If you apply a fixed height to the a tag which holds the Pinterest button’s iframe, set overflow to hidden, and then apply a negative margin to the nested iframe, you can essentially get rid of that pesky count bubble that will not go away.

The resulting CSS is something like:

 
	.addthis_toolbox > a.addthis_button_pinterest_pinit{
		margin:8px 0px 4px;
		height:24px;
		overflow:hidden;
		vertical-align:baseline;
	}
	.addthis_toolbox > a.addthis_button_pinterest_pinit > iframe{
		margin-top:-34px;
	}