Copying AWS tags between instances

Probably a better way to handle this, but occasionally I want to run a script against resources that have a tag NEWKEY=NEWVALUE, and I want to update a different set of instances to carry that tag.

Get the instance IDs:

aws --profile MYPROFILE ec2 describe-instances --filters Name="tag:OLD_KEY",Values="PARTIAL_VALUE*" --query 'Reservations[*].Instances[*].InstanceId[]' --output=text

and an update tag example:

aws --profile MYPROFILE ec2 create-tags --resources i-000resource1 i-000resource2 --tags Key=NEWKEY,Value=NEWVALUE

and putting it all together for the lazy:

aws --profile MYPROFILE ec2 create-tags --resources $(aws --profile MYPROFILE ec2 describe-instances --filters Name="tag:OLD_KEY",Values="PARTIAL_VALUE*" --query 'Reservations[*].Instances[*].InstanceId[]' --output=text) --tags Key=NEWKEY,Value=NEWVALUE

Of course, don't forget the appropriate --region flag if applicable.
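Putting a hedged wrapper around the two commands above makes this reusable; a sketch (copy_tag and DRY_RUN are my own names, not AWS CLI features):

```shell
# Hypothetical helper: copy_tag PROFILE OLD_KEY 'PARTIAL_VALUE*' NEW_KEY NEW_VALUE
# With DRY_RUN=1 it prints the create-tags command instead of running it.
copy_tag() {
  profile=$1 old_key=$2 old_value=$3 new_key=$4 new_value=$5
  ids=$(aws --profile "$profile" ec2 describe-instances \
    --filters "Name=tag:$old_key,Values=$old_value" \
    --query 'Reservations[*].Instances[*].InstanceId[]' --output text)
  cmd="aws --profile $profile ec2 create-tags --resources $ids --tags Key=$new_key,Value=$new_value"
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$cmd"; else $cmd; fi
}
```

Run `DRY_RUN=1 copy_tag MYPROFILE OLD_KEY 'PARTIAL_VALUE*' NEWKEY NEWVALUE` first to eyeball the command before letting it loose.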

ldapsearch queries

LDAP searches: always fun and powerful, but I often spend too much time figuring out the syntax. Without further ado:

Dump all users; the -E option handles pagination of requests when the server limits results to 1000:

ldapsearch -E pr=1000/noprompt -x -D "CN=serviceuser,OU=exampleorg,DC=example,DC=ad" -w PASSWORD -H "ldaps://example.com:636" -b "OU=Users,OU=ExampleOrg,DC=example,DC=ad" | tee -a /tmp/LDAP-DUMP-USERS.txt

-W will prompt for the password instead of passing it on the command line.
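A quick sanity check on the dump (assuming the tee'd file above): every LDIF record starts with a dn: line, so counting those gives the entry count:

```shell
# Count entries in the LDIF dump; each record begins with a "dn:" line.
DUMP=/tmp/LDAP-DUMP-USERS.txt
[ -f "$DUMP" ] && grep -c '^dn:' "$DUMP"
```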

And just for fun, this will pull a user's thumbnailPhoto into /tmp:

ldapsearch -E pr=1000/noprompt -x -D "CN=serviceuser,OU=exampleorg,DC=example,DC=ad" -w PASSWORD \
  -H "ldaps://example.com:636" -s sub \
  -b "CN=MY USER,OU=Users,OU=ExampleOrg,DC=example,DC=ad" \
  -t "thumbnailPhoto=*" thumbnailPhoto |
while read -r line; do
  echo "$line" | grep -q "^dn:" && name=$(echo "$line" | sed 's/.*CN=\([^,]\+\).*/\1/')
  echo "$line" | grep -q "file://" && file=$(echo "$line" | sed 's|.*file://||') && mv "$file" "/tmp/$name.jpg"
done

Migrating AEM Users, Groups & ACLs

There are a dozen posts about migrating users and groups for AEM, and I have not found one that was fully correct yet. If you are finding the same, hopefully this will help.

Start with ACLs and follow the generic directions here:
https://helpx.adobe.com/experience-manager/kb/how-to-migrate-ACLs-from-one-AEM-instance-to-another.html
the path for the packager is
/miscadmin#/etc/acs-commons/packagers
remove all users unless migrating a specific list of user IDs
specify the paths
/etc/tags(/.*)
/etc/workflows/models(/.*)
/etc/dam/metadataeditor(/.*)
/etc/dam/tools(/.*)
/etc/dam/imageserver/macros(/.*)
/etc/replication/agents.author(/.*)
/libs/dam/gui/content/reports(/.*)
/libs/cq/core/content/nav(/.*)
/libs/dam/content/reports(/.*)
/content/dam(/.*)
/conf(/.*)
/content/SITE(/.*)
are usually recommended, but any custom areas should be included as well. Do not remove all paths: an empty list means everything, and the package will likely fail to build.
Create the package and install it on the destination server.

create a groups package that includes

filter /home/groups
  exclude /home/groups/community
  exclude /home/groups/default
  exclude /home/groups/forms
  exclude /home/groups/mac
  exclude /home/groups/media
  exclude /home/groups/projects

Build, download, and install the groups package on the destination server.

Next, identify the admin and anonymous user paths on the source AEM instance, and the destination folder paths for the admin and anonymous users.

For instance, if the source admin user path is /home/users/F/FHwU5RtdJrElQD83OIQV (easily found under /miscadmin or from the URL path in the Touch UI user security screen),
then SRC ADMIN USER PATH = /home/users/F/FHwU5RtdJrElQD83OIQV

if /home/users/F/FHwU5RtdJrElQD83OIQV is the DST ADMIN USER PATH then
DST ADMIN FOLDER PATH = /home/users/F

create a users package that has

filter /home/users
  exclude /home/users/.*/.tokens
  exclude /home/users/a/anonymous
  exclude /home/users/geometrixx
  exclude /home/users/mac
  exclude /home/users/media
  exclude /home/users/projects
  exclude /home/users/system
  exclude {SRC ADMIN USER PATH}
  exclude {SRC ANONYMOUS USER PATH}
  exclude {DST ADMIN FOLDER PATH}
  exclude {DST ANONYMOUS FOLDER PATH}

Build, download, and install this package on the destination.

This will exclude all users in the two destination folders that contain the admin and anonymous users.
Fix this by creating a package on the source with all the users in those parent directories.
There should be one filter per user, each with a subsequent .tokens exclude.
for instance if /home/users/h was excluded and the below path is a user on source:

filter /home/users/h/heby6dvTRh46Zny4ghTA
  exclude /home/users/h/heby6dvTRh46Zny4ghTA/.tokens

Repeat as necessary for all users in the two paths, then
build, download, and install this package on the destination.
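If there are many users in those folders, generating the filter list is easy to script; a sketch (list_user_filters is a hypothetical helper, fed the folder and its user node names):

```shell
# Print a "filter" line plus a ".tokens" exclude for each user node.
list_user_filters() {
  base=$1; shift
  for u in "$@"; do
    printf 'filter %s/%s\n  exclude %s/%s/.tokens\n' "$base" "$u" "$base" "$u"
  done
}
```

For example, `list_user_filters /home/users/h heby6dvTRh46Zny4ghTA` emits the filter pair shown above.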

EXCEPTION:
Because you cannot migrate the /home or /home/users folders directly, user permissions within the home folder are lost, specifically full access to each user's own node.
You should be able to edit the following Python script with the author URL and the admin password, then run it to grant all users read on the home directory and full permissions on their own account.
Different user permissions can be set in the update_user_perms function if needed.

You can validate that what this script does will work by running the curl below against a test user.

The example assumes that /home/users/h/heby6dvTRh46Zny4ghTA is the user's path and that the test user is "testuser"; you can also confirm the authorizableId by looking at the JSON at http://localhost:4502/home/users/h/heby6dvTRh46Zny4ghTA.1.json

Use caution: I have only tested this script on AEM 6.3, and it could use some cleanup.


curl -u "admin:PASSWORD" -X POST -FauthorizableId=testuser -Fchangelog=path:/home/users/h/heby6dvTRh46Zny4ghTA,read:true,modify:true,create:true,delete:true,acl_read:true,acl_edit:true,replicate:true http://localhost:4502/.cqactions.html
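To apply that POST across many users, you can wrap it in a function; a sketch that only echoes the curl for review (apply_perms is my own name; the endpoint and changelog format are the same as the curl above):

```shell
# Echo the permission-granting curl for one user so it can be reviewed
# before running (drop the echo to actually execute it).
apply_perms() {
  id=$1 path=$2
  echo curl -u "admin:PASSWORD" -X POST \
    -FauthorizableId="$id" \
    -Fchangelog="path:$path,read:true,modify:true,create:true,delete:true,acl_read:true,acl_edit:true,replicate:true" \
    http://localhost:4502/.cqactions.html
}
```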

SCRIPT

Installing ffmpeg 3.3 on AWS


#!/bin/sh

if [ "$(/usr/bin/whoami)" != "root" ]; then
  echo "You need to execute this script as root."
  exit 1
fi

cat > /etc/yum.repos.d/centos.repo<<EOF
[centos]
name=CentOS-6 Base
baseurl=http://mirror.centos.org/centos/6/os/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
enabled=1
priority=1
protect=1
includepkgs=SDL SDL-devel gsm gsm-devel libtheora theora-tools libdc1394 libdc1394-devel libraw1394-devel
EOF
rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

rpm -Uhv http://repository.it4i.cz/mirrors/repoforge/redhat/el6/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
yum -y update

## didn't remove this from the 2.2 install; may not be needed
rpm -Uhv ftp://195.220.108.108/linux/centos/6.9/os/x86_64/Packages/libraw1394-2.0.4-1.el6.i686.rpm
rpm -Uhv ftp://195.220.108.108/linux/centos/6.9/os/x86_64/Packages/libraw1394-2.0.4-1.el6.x86_64.rpm

## some of this is left over from the 2.2 install; remove anything you don't need if you are concerned
yum -y install glibc gcc gcc-c++ autoconf automake libtool git make nasm pkgconfig
yum -y install SDL-devel a52dec a52dec-devel alsa-lib-devel faac faac-devel faad2 faad2-devel
yum -y install freetype-devel giflib gsm gsm-devel imlib2 imlib2-devel lame lame-devel libICE-devel libSM-devel libX11-devel
yum -y install libXau-devel libXdmcp-devel libXext-devel libXrandr-devel libXrender-devel libXt-devel
yum -y install libogg libvorbis vorbis-tools mesa-libGL-devel mesa-libGLU-devel xorg-x11-proto-devel zlib-devel
yum -y install libtheora theora-tools
yum -y install ncurses-devel
yum -y install libdc1394 libdc1394-devel
yum -y install amrnb-devel amrwb-devel opencore-amr-devel
yum -y install bzip2 cmake make mercurial

rpm -Uhv http://www.nasm.us/pub/nasm/stable/linux/nasm-2.13.01-0.fc24.x86_64.rpm

mkdir ~/ffmpeg_sources

cd ~/ffmpeg_sources
curl -O http://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz
tar xzvf yasm-1.3.0.tar.gz
cd yasm-1.3.0
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin"
make
make install

cd ~/ffmpeg_sources
git clone --depth 1 http://git.videolan.org/git/x264
cd x264
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-static
make
make install
echo

cd ~/ffmpeg_sources
hg clone https://bitbucket.org/multicoreware/x265
cd ~/ffmpeg_sources/x265/build/linux
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" -DENABLE_SHARED:bool=off ../../source
make
make install
echo

cd ~/ffmpeg_sources
git clone --depth 1 https://github.com/mstorsjo/fdk-aac
cd fdk-aac
autoreconf -fiv
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
echo

cd ~/ffmpeg_sources
curl -L -O http://downloads.sourceforge.net/project/lame/lame/3.99/lame-3.99.5.tar.gz
tar xzvf lame-3.99.5.tar.gz
cd lame-3.99.5
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --disable-shared --enable-nasm
make
make install
echo

cd ~/ffmpeg_sources
curl -O https://archive.mozilla.org/pub/opus/opus-1.1.5.tar.gz
tar xzvf opus-1.1.5.tar.gz
cd opus-1.1.5
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install

cd ~/ffmpeg_sources
curl -O http://downloads.xiph.org/releases/ogg/libogg-1.3.2.tar.gz
tar xzvf libogg-1.3.2.tar.gz
cd libogg-1.3.2
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
echo

cd ~/ffmpeg_sources
curl -O http://downloads.xiph.org/releases/vorbis/libvorbis-1.3.4.tar.gz
tar xzvf libvorbis-1.3.4.tar.gz
cd libvorbis-1.3.4
./configure --prefix="$HOME/ffmpeg_build" --with-ogg="$HOME/ffmpeg_build" --disable-shared
make
make install
echo

cd ~/ffmpeg_sources
git clone --depth 1 https://chromium.googlesource.com/webm/libvpx.git
cd libvpx
./configure --prefix="$HOME/ffmpeg_build" --disable-examples --as=yasm
PATH="$HOME/bin:$PATH" make
make install
echo

cd ~/ffmpeg_sources
curl -O http://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
tar xjvf ffmpeg-snapshot.tar.bz2
cd ffmpeg
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib -ldl" --bindir="$HOME/bin" --pkg-config-flags="--static" --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
make
make install
hash -r

cp $HOME/bin/* /usr/bin/
ffmpeg -version

Statsd (bucky) / Graphite / Grafana

graphite install
yum install -y gcc gcc-c++ libffi libffi-devel httpd24 httpd24-tools mysql-server mysql MySQL-python27 mod24_wsgi-python27 cairo-devel freetype* urw-fonts
pip install cairocffi pytz scandir

export PYTHONPATH="/opt/graphite/lib/:/opt/graphite/webapp/"
pip install --no-binary=:all: https://github.com/graphite-project/whisper/tarball/master
pip install --no-binary=:all: https://github.com/graphite-project/carbon/tarball/master
pip install --no-binary=:all: https://github.com/graphite-project/graphite-web/tarball/master

export GRAPHITE_ROOT=/opt/graphite

vim $GRAPHITE_ROOT/webapp/graphite/local_settings.py
add:
DATABASES = {
    'default': {
        'NAME': 'graphiteDB',
        'ENGINE': 'django.db.backends.mysql',
        'USER': 'graphite',
        'PASSWORD': '${PASSWORD}',
        'HOST': 'localhost',
        'PORT': '3306'
    }
}

sudo /etc/init.d/mysqld start
sudo /usr/bin/mysqladmin -u root password '${PASSWORD}!'
mysql -u root -p'${PASSWORD}!'
CREATE USER 'graphite'@'localhost' IDENTIFIED BY '${PASSWORD}';
GRANT ALL PRIVILEGES ON *.* TO graphite@'%' IDENTIFIED BY '${PASSWORD}';
GRANT ALL PRIVILEGES ON *.* TO graphite@'localhost' IDENTIFIED BY '${PASSWORD}';
FLUSH PRIVILEGES;
exit;
mysql -u graphite -p'${PASSWORD}'
create database graphiteDB;
exit;

PYTHONPATH=$GRAPHITE_ROOT/webapp django-admin.py migrate --settings=graphite.settings --run-syncdb
PYTHONPATH=$GRAPHITE_ROOT/webapp django-admin.py collectstatic --noinput --settings=graphite.settings

cp /opt/graphite/examples/example-graphite-vhost.conf /etc/httpd/conf.d/graphite-vhost.conf
vim /etc/httpd/conf.d/graphite-vhost.conf
add:
<Directory /opt/graphite/static/>
<IfVersion < 2.4>
Order deny,allow
Allow from all
</IfVersion>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
</Directory>

cp /opt/graphite/conf/graphite.wsgi.example /opt/graphite/conf/graphite.wsgi

cd /opt/graphite/conf
Copy the following from their .example versions:
carbon.conf
storage-aggregation.conf
storage-schemas.conf
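Those conf copies can be scripted; a small sketch (copy_examples is my own helper name):

```shell
# Copy each required conf file from its shipped .example version.
copy_examples() {
  dir=$1
  for f in carbon.conf storage-aggregation.conf storage-schemas.conf; do
    cp "$dir/$f.example" "$dir/$f"
  done
}
```

Usage: `copy_examples /opt/graphite/conf`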

yum install collectd
sudo pip install bucky
sudo mkdir /etc/bucky
vim /etc/bucky/bucky.conf
add contents from https://github.com/trbs/bucky
add "/usr/share/collectd/types.db" to the types.db list in /etc/bucky/bucky.conf

create init scripts for bucky and carbon

#!/bin/bash
# bucky Init script for running the bucky daemon
#
#
# chkconfig: - 98 02
#
# description: some description
# processname: bucky

PATH=/usr/bin:/sbin:/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:$PATH
export PATH

lockfile='/var/lock/subsys/bucky'
pidfile='/var/run/bucky.pid'
bucky='/usr/local/bin/bucky'
config='/etc/bucky/bucky.conf'
logfile='/var/log/bucky/bucky.log'

RETVAL=0

# Source function library.
. /etc/rc.d/init.d/functions

# Determine if we can use the -p option to daemon, killproc, and status.
# RHEL < 5 can't.
if status | grep -q -- '-p' 2>/dev/null; then
    pidopts="-p $pidfile"
fi

start() {
    echo -n $"Starting bucky daemon: "
    $bucky $config >> $logfile 2>&1 &
    RETVAL=$?

    local PID=`pgrep -f "${bucky} ${config}"`
    echo $PID > ${pidfile}

    [ $RETVAL -eq 0 ] && (touch ${lockfile}; echo_success) || echo_failure
    echo

    return $RETVAL
}

stop() {
    echo -n $"Stopping bucky daemon: "
    killproc $pidopts $bucky
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f ${lockfile} ${pidfile}
}

restart() {
    stop
    start
}

rh_status() {
    status $pidopts $bucky
    RETVAL=$?
    return $RETVAL
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        restart
        ;;
    status)
        rh_status
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|condrestart|try-restart|status}"
        exit 1
esac

exit $RETVAL

yum install https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.2.0-1.x86_64.rpm

Monitoring AEM part 1

Adobe AEM and CQ provide a wealth of data to monitor, and recent editions include monitoring pages on the individual instances. However, if you are an administrator with hundreds (or thousands) of servers, or in the services case, AEM instances, it is not practical to log in to each one to monitor it, and proactive monitors are necessary. Having a repeatably deployable monitoring solution that integrates with your existing alerting system is key, so here is a simple mechanism to do so within AEM.

JMX MBeans. That's it, no special secret sauce required. AEM exposes all of the metrics that you can monitor within the application as MBeans. Simply start your instance with RMI flags to enable monitoring:

-Dcom.sun.management.jmxremote.port=8001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost

for instance, will allow monitoring from localhost. Do not allow non-localhost connections without proper authentication.

To get a feel for what metrics are available, pull down jmxterm and run it as in the following example:

$ wget http://downloads.sourceforge.net/cyclops-group/jmxterm-1.0-alpha-4-uber.jar
$ java -jar jmxterm-1.0-alpha-4-uber.jar
$> open localhost:8001
$> beans
$> info -b com.adobe.granite.replication:id="flush",type=agent
$> get -b com.adobe.granite.replication:id="flush",type=agent QueueBlocked

In the example above, you will see all the mbeans available for AEM, then what attributes are available for the replication agent flush and finally whether that queue is blocked (a common admin responsibility for AEM).
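The same session can also be scripted for unattended checks; a sketch assuming jmxterm's -l (location), -n (non-interactive), and -i (input file) options:

```shell
# Write the jmxterm commands to a file, then run them non-interactively.
cat > /tmp/jmx-check.txt <<'EOF'
get -b com.adobe.granite.replication:id="flush",type=agent QueueBlocked
EOF
# Needs a running AEM instance started with the RMI flags above:
# java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:8001 -n -i /tmp/jmx-check.txt
```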

So what is next? Repeatability, scalability, trending, and integration are key to successfully deploying this and to ensuring that config management can roll the solution out. There are challenges with most of the off-the-shelf RMI/JMX MBean monitors out there, specifically their interpretation of the booleans and Java primitives that AEM uses.

I plan to cover how I collect, store, dashboard and alert on AEM metrics in the future and most of it is applicable to any application, but if you can’t wait for future updates leave a message in the comments and I will get back to you.

Initializing a Roku without remote

So I recently bought a Roku 3, and the remote it came with would not pair with the box over Wi-Fi Direct for the initial setup. The phone Roku apps were useless since the Roku was not yet set up or configured. Here is an easy method to get through the setup and use the phone app if you aren't patient enough to wait the 3-5 days for a replacement remote.

Plug the Roku into the ethernet cable (and TV/power)
Log into your router to get the IP address of the Roku (or scan your block)
Telnet to the Roku on port 8080
The press command will now let you control the screen (see the key list below; you can issue multiple keys in one go, e.g. press ududud will go up, down, up, down, etc.)
Set up the wired connection first! (do not set up wireless, as this drops the connection and requires a factory reset)
The Roku will then likely download updates, after which telnet is disabled until you perform a factory reset
Telnet back into the Roku, redo the wired configuration, and finish the initialization
At this point the phone apps will work; use them to set up the wireless config, then unplug the ethernet, reboot, and you are good to go

>press
h Home
u Up
d Down
r Right
l Left
s Select
f,> Fwd
b,< Rev
p Play
y InstantReplay
i Info
k Back
= Backspace
o PlayOnly
t Stop
e Enter
a A
c B
n Closed Caption
? Search
[ Volume Down
] Volume Up
\ Volume Mute
~ Input Source
` Input Source Prev
x1-x9 Partner1 – Partner9
xA-xH Partner10 – Partner17
! Power
@ PowerOn
# PowerOff
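If you prefer a one-liner over an interactive telnet session, the same keys can be sent with nc; a sketch (roku_press is my own wrapper; it assumes the console on port 8080 accepts the press syntax shown above):

```shell
# Send a press sequence to the Roku console, e.g. roku_press 192.168.1.50 ududud
roku_press() {
  printf 'press %s\r\n' "$2" | nc "$1" 8080
}
```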

Pushin’ and Popin’ like a pro in Bash

This was blatantly stolen from here.

Ever wish

$ cd -

took you back to the previous directory more than once?

The answer was so obvious once I saw it that I smacked my head and said “D’OH” out loud (causing several people around the office to give me that ‘wth!?’ look.)

Stick this in your .bashrc and be welcomed into the world of directory history:

function cd {
    if (("$#" > 0)); then
        if [ "$1" == "-" ]; then
            popd > /dev/null
        else
            pushd "$@" > /dev/null
        fi
    else
        pushd "$HOME" > /dev/null
    fi
}

I suppose that one could just type ‘pushd’ or ‘popd’ or alias those to shorter commands, but my muscle memory has simply chiseled cd (and ls for that matter) into stone.
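Under the hood this is just bash's directory stack, which you can inspect with dirs; a quick demo (the paths are arbitrary):

```shell
cd /tmp                 # plain builtin cd (function not loaded here)
pushd /usr > /dev/null  # stack is now: /usr /tmp
pushd /etc > /dev/null  # stack is now: /etc /usr /tmp
dirs -l                 # prints the stack, most recent first
popd > /dev/null        # pops /etc, returning to /usr
pwd
```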

JMXTrans take 2

I’ve been doing a lot of work on Amazon and Rackspace servers lately. This has led to some maintainability issues with my jmxtrans install. Namely every new server, or every time an instance is moved, I have to reconfigure in order to poll it. So I decided to switch to the jmxtrans-agent and push monitoring.

Unfortunately, one of the apps I was monitoring stores a long[60] as the MBean value, and jmxtrans-agent was unable to iterate over it. I opened a ticket and had an immediate response, as well as a patch the following day. Hats off to Cyrille and the team, and thanks for an awesome tool!

Phantomjs & Quality Engineering

I'm not about to dismiss the need for Selenium or other tools to test specific browsers, but when it comes to quickly getting an indication of how long it takes a site to render and whether there are any errors, phantomjs is hard to beat. The script below opens a webpage at a specific viewport size, prints the title, reports the timing, and captures the result as a PNG.

This is critical for testing responsive design, where the website renders differently at different screen sizes (breakpoints). Having something like this configured and scheduled to run early for all of a project's templates makes it easy to track and identify when latency changes or failures occur. As always, transparency is key, and having this executed automatically as part of the build process creates a small, closed feedback loop that helps ensure quick turnaround times in the event of a problem.


var page = require('webpage').create(),
    system = require('system'),
    t, address,
    swidth = '1366',
    sheight = '768';
if (system.args.length === 1) {
  console.log('Usage: test.js <address> optional( <screen width> <screen height> )');
  phantom.exit();
}
t = Date.now();
address = system.args[1];
if (system.args[2]) {
  swidth = system.args[2];
}
if (system.args[3]) {
  sheight = system.args[3];
}
page.onConsoleMessage = function (msg) {
  console.log('Page title is ' + msg);
};
page.onInitialized = function () {
  page.evaluate(function (swidth, sheight) {
    (function () {
      window.screen = {
        width: swidth,
        height: sheight
      };
    })();
  }, swidth, sheight);
};
page.open(address, function (status) {
  if (status !== 'success') {
    console.log('FAIL to load the address');
  } else {
    t = Date.now() - t;
    console.log('The default user agent is ' + page.settings.userAgent);
    console.log('Loading time ' + t + ' msec');
    console.log(JSON.stringify(page.evaluate(function () { return window.screen })));
    page.render('images/test.png');
    page.evaluate(function () {
      console.log(document.title);
    });
  }
  phantom.exit();
});

Taken a step further, you can calculate the average render time for an entire site backed by a CMS by tracking outbound links from a given entry point (the homepage).

Pulling the links off the page looks like this:

function getLinks() {
  var links = document.querySelectorAll('li a');
  return Array.prototype.map.call(links, function(aLink) {
    return aLink.getAttribute('href');
  });
}

Using this method also creates a report of all the pages for a website, weighted by outbound links. For high-content sites backed by a CMS, this means I now have a current list of pages that is representative of inbound traffic to the website. This ends up being a critical piece of performance testing (in addition to any transactional elements).
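Once the "Loading time N msec" lines from a batch of runs have been collected (say, tee'd into render.log), averaging them is one awk call; avg_ms is a hypothetical helper:

```shell
# Average the "Loading time N msec" lines emitted by the phantomjs script.
avg_ms() {
  awk '/Loading time/ { sum += $3; n++ }
       END { if (n) printf "%.1f msec over %d pages\n", sum / n, n }' "$1"
}
```

Usage: `avg_ms render.log`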