  1. Active-Passive HA NAT

    Here at Shift Labs we are in the process of overhauling our AWS infrastructure, and we would like to share our active-passive HA NAT (AP-HA-NAT) solution.

    Components involved:

    •    Sensu Server

    •    Redis

    •    AWS NAT boxes


    We have multiple projects, each running in its own AWS VPC with several public and private subnets per VPC. Given the multi-VPC setup, we have an Ops VPC which houses all the Ops resources like monitoring, CI/CD, graphing, etc. All the project VPCs can talk to the Ops VPC but cannot talk to each other.

    It goes without saying that we need to monitor all of these projects. Since we have a public/private subnet setup, the NAT boxes were our single point of failure (SPOF). We are aware of the HA_NAT.sh option, but in our case we wanted to monitor all NAT boxes from a central place (the Ops VPC), and we wanted our monitoring solution to trigger an action in case of a NAT failure, alerting us in addition to taking the action (for sanity’s sake, one place that is responsible for all kinds of alerts and actions).

    Given these requirements, we set out to build an active-passive high-availability NAT (AP-HA-NAT) solution.

    We are using Sensu as our monitoring solution, and it sits in the Ops VPC (the Ops VPC might appear to be putting all our eggs in one basket, but we have redundancy inside the VPC). The goal was to come up with a way we could:

    1) Keep Sensu always aware of which projects we have and what their NAT boxes are.

    2) Track those NAT boxes in order to ensure that they all are healthy.

    We use masterless SaltStack as our config deployment tool. To keep Sensu always updated with the latest projects and their respective NAT boxes, we wrote a custom Salt module which uses Python boto to update the Sensu config file with the latest projects, their NAT boxes, and some other project-specific information.

    On the NAT side, we are using Amazon Linux EC2 NAT instances. These are pre-configured NAT boxes with a bare-minimum setup meant specifically for NAT purposes.

    In our design, we run a simple Python script on each NAT box to update a Redis server in our Ops VPC. The script and all its requirements are installed via the user_data script during provisioning. The script pulls its instance-id from the instance metadata endpoint. It then uses the instance-id as the key and the current epoch time as the value for a Redis “SET” command.
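    A minimal sketch of such a heartbeat script (the metadata URL and key layout are assumptions about our setup, and the Redis client is passed in so the logic stays testable; any object with a `set(key, value)` method, such as `redis.StrictRedis`, works):

```python
import time

# Hypothetical EC2 instance metadata endpoint the real script reads its id from.
METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"

def send_heartbeat(redis_client, instance_id, now=None):
    """Record this NAT box's liveness: instance-id -> current epoch seconds."""
    now = int(time.time()) if now is None else now
    redis_client.set(instance_id, now)
    return now

# A dict-backed stand-in keeps the sketch self-contained; in production this
# would be a real Redis connection to the Ops VPC.
class FakeRedis(dict):
    def set(self, key, value):
        self[key] = value

store = FakeRedis()
send_heartbeat(store, "i-0123abcd", now=1700000000)
```

    The real script simply runs this on a short interval (e.g. from cron), overwriting the same key each time.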

    Now we have Redis holding a key-value pair for every NAT box in the infrastructure. Our Sensu box, armed with all the project-specific NAT information, has a Python check that simply does a “GET” for all the NAT instance-ids available to it. The expected instance-ids are defined in the Sensu check’s configuration, which was generated by Salt using boto. We grab the epoch time stored as the value of each instance-id key. If the difference between that value and the current time is greater than a specified threshold, the check updates the route in the route table (using boto again) and reboots the box which hasn’t updated its status in Redis.
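    The heart of that check reduces to a pure function over the heartbeat data; here is a sketch (the function name and the 60-second threshold are our illustrative choices, not the exact check):

```python
def find_stale_nats(heartbeats, expected_ids, now, max_age=60):
    """Return instance-ids whose heartbeat is missing or older than max_age seconds.

    heartbeats:   dict of instance-id -> last epoch time seen in Redis
    expected_ids: instance-ids Salt/boto wrote into the Sensu check config
    """
    stale = []
    for instance_id in expected_ids:
        last_seen = heartbeats.get(instance_id)
        if last_seen is None or now - last_seen > max_age:
            stale.append(instance_id)
    return stale

# i-aaaa reported recently, i-bbbb has gone quiet, i-cccc never reported at all.
stale = find_stale_nats(
    {"i-aaaa": 990, "i-bbbb": 900},
    ["i-aaaa", "i-bbbb", "i-cccc"],
    now=1000,
)
```

    For each id returned, the real check then updates the affected route table via boto and reboots the instance, as described above.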

    Simply updating the route without rebooting the box didn’t work, because all the boxes in the VPC whose NAT was updated were still trying to talk to the old NAT. We posted this issue in the AWS forum but haven’t gotten a response yet as to why this behavior occurs.

    This solution fits well into our setup, where we want all monitoring to live in one place and our monitoring solution to take action in an alert situation.

  2. Visualizing Code Coverage with Nose, Gevent, and Duvet

    In this post we’ll introduce a few Python-related tools that should aid you in day-to-day development. Anyone using Python 2.7 will benefit from the tools below.

    Gevent is a library for working with greenlets, which allow Python to make async I/O calls. With network-heavy applications (databases, APIs), gevent can help execute multiple calls concurrently. It’s the same model as used by Node.js and nginx, but in general it does not expose callbacks to the end user. As a result, gevent’ed code looks very similar to ordinary synchronous Python code.
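    For a taste of what that looks like, here’s a small sketch (the `gevent.sleep` stands in for a blocking network call; with real sockets you’d rely on the monkey patching to make stdlib I/O cooperative):

```python
import gevent
from gevent import monkey

# Patch blocking stdlib calls (socket, time.sleep, ...) so they yield
# to other greenlets instead of blocking the whole process.
monkey.patch_all()

def fetch(n):
    gevent.sleep(0.01)  # stands in for a blocking network call
    return n * 2

# All five "calls" run concurrently, so this takes ~10ms, not ~50ms.
jobs = [gevent.spawn(fetch, n) for n in range(5)]
gevent.joinall(jobs)
results = [job.value for job in jobs]
```

    Note there are no callbacks in sight: `fetch` reads like plain synchronous code.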

    Nose is a test runner. It has a great library of available plugins. One of those uses Coverage, a great tool written by Ned Batchelder. Unfortunately, Coverage doesn’t support gevent out of the box. Fortunately, there’s a fork which adds gevent support. You can install the gevent-enabled version with the following:

    pip install git+https://github.com/newbrough/coverage.git

    Run your tests. We use a custom test runner, cleverly named bin/nose, which handles some setup (DB tables, Elasticsearch mappings). Nose has an option to generate XML coverage files, but we need the .coverage file, and the only way to generate it seems to be the coverage utility itself. We only care about our own modules, which all live under the top-level shift namespace.

    coverage run --include 'shift/*' bin/nose

    Coverage data is saved to .coverage

    Great, now we’ve got our test coverage; next we need a way to visualize it. Duvet is a nice tool for exploring your codebase in a color-coded manner to quickly see where your test coverage is lacking.

    pip install duvet

    Open duvet. It gives you a nice color-coded file browser on the left, while the code window on the right highlights code paths your tests never reached.


    Here’s a screenshot from the pybee site.

    Now you can browse through your code to learn what code paths have not been reached through your tests. You can evaluate whether you need to add tests to cover them, or remove them.

  3. Cassandra: tuning the JVM for read-heavy workloads

    We recently completed a very successful round of Cassandra tuning here at SHIFT. This post will cover one of the most impactful adjustments we made, which was to the JVM garbage collection settings. I’ll be discussing how the JVM garbage collector works, how it was affecting our cluster performance, the adjustments we made, their effects, the reasoning behind them, and share the tools and techniques we used.

    The cluster we tuned is hosted on AWS and is comprised of 6 hi1.4xlarge EC2 instances, each with two 1TB SSDs in a RAID 0 configuration. The cluster’s dataset is growing steadily. At the time of this writing, our dataset is 341GB, up from less than 200GB a few months ago, and is growing by 2-3GB per day. The workload on this cluster is very read heavy, with quorum reads making up 99% of all operations.

    How the JVM’s garbage collection works, and how it affects Cassandra’s performance

    When tuning your garbage collection configuration, the main things you need to worry about are pause time and throughput. Pause time is the length of time the collector stops the application while it frees up memory. Throughput is determined by how often garbage collection runs and pauses the application: the more often the collector runs, the lower the throughput. When tuning for an OLTP database like Cassandra, the goal is to maximize the number of requests that can be serviced and minimize the time it takes to serve them. To do that, you need to minimize the length of the collection pauses as well as the frequency of collection.

    With the garbage collector Cassandra ships with, the JVM’s available memory is divided into three sections: the new generation, the old generation, and the permanent generation. I’m going to be talking mainly about the new and old generations. For your googling convenience, the new gen is collected by the Parallel New (ParNew) collector, and the old gen is collected by the Concurrent Mark and Sweep (CMS) collector.

    The New Generation

    The new generation is divided into eden, which takes up the bulk of the new generation, and two survivor spaces. Eden is where new objects are allocated, and objects that survive collection of eden are moved into the survivor spaces. There are two survivor spaces, but only one is occupied with objects at a time; the other is empty.

    When eden fills up with new objects, a minor gc is triggered. A minor gc stops execution, iterates over the objects in eden, copies any objects that are not (yet) garbage to the active survivor space, and clears eden. If the minor gc has filled up the active survivor space, it performs the same process on the survivor space. Objects that are still active are moved to the other survivor space, and the old survivor space is cleared. If an object has survived a certain number of survivor space collections, (cassandra defaults to 1), it is promoted to the old generation. Once this is done, the application resumes execution.

    The two most important things to keep in mind when we’re talking about ParNew collection of the new gen are:

    1) It’s a stop-the-world algorithm, which means that every time it’s run, the application is paused, the collector runs, then the application resumes.

    2) Finding and removing garbage is fast; moving active objects from eden to the survivor spaces, or from the survivor spaces to the old gen, is slow. If you have long ParNew pauses, it means that a lot of the objects in eden are not (yet) garbage, and they’re being copied around to the survivor space, or into the old gen.

    The Old Generation

    The old generation contains objects that have survived long enough to not be collected by a minor GC. When a pre-determined percentage of the old generation is full (75% by default in cassandra), the CMS collector is run. Under most circumstances it runs while the application is running, although there are two stop-the-world pauses when it identifies garbage; they are typically very short, and don’t take more than 10ms (in my experience). However, if the old gen fills up before the CMS collector can finish, it’s a different story. The application is paused while a full GC is run. A full GC checks everything: new gen, old gen, and perm gen, and can result in significant (multi-second) pauses. If you’re seeing multi-second GC pauses, you’re likely seeing major collections happening, and you need to fix your gc settings.

    Our performance problems

    As our dataset grew, performance slowly started to degrade. Eventually, we reached a point where nodes would become unresponsive for several seconds or more. This would then cause the cluster to start thrashing load around, bringing down 3 or more nodes for several minutes.

    As we looked into the data in OpsCenter, we started to notice a pattern: reads per second would increase, then the ParNew collection time and frequency would increase, then the read latency times would shoot up to several seconds, and the cluster would become unresponsive.

    So we began tailing the gc logs, and noticed there were regular pauses of over 200ms (ParNew collections), with some over 15 seconds (these were full GCs). We began monitoring Cassandra on one or two nodes with jstat during these periods of high latency.

    jstat is a utility that ships with the JVM; it shows what is going on in your different heap sections and what the garbage collector is doing. The command jstat -gc <pid> 250ms 0 will print the status of all generations every quarter second. Watching the eden figures, we could see that eden was filling up several times per second, triggering very frequent minor collections. Additionally, the minor collection times were regularly between 100 and 300 milliseconds, and up to 400 milliseconds in some cases. We were also seeing major collections every few minutes that would take 5-15 seconds. Basically, the garbage collector was so far out of tune with Cassandra’s behavior that Cassandra was spending a ton of time collecting garbage. Cutting the number of requests isn’t a real solution, and iostat made it pretty clear that the disk was not the bottleneck (read throughput was around 2MB/sec), so adding new nodes would be an expensive waste of hardware (we’d also tried adding new nodes, and it hadn’t helped).

    Given this information, we came up with the following hypothesis: each read request allocates short-lived objects, both for the result being returned to the client/coordinator and for the objects that actually process the request (iterators, request/response objects, etc.). With the rate that requests were coming in, and the frequency of new gen collections, it seemed pretty likely that a lot of the objects in eden at the start of a gc would be involved in the processing of in-flight requests, and would therefore become garbage very soon. However, given the rate of requests and ParNew collections, they weren’t yet garbage when inspected by the ParNew collector. Since 99% of the requests are reads, requests don’t have any long-term side effects, like mutating memtables, so there’s no reason for them to be promoted out of eden.

    If this hypothesis was correct, it had two implications:

    First, the ParNew collection is going to take a long time because it’s copying so many objects around (remember, collecting garbage is fast; copying objects between eden/survivor spaces and generations is slow). The 200ms ParNew collection times indicated this was happening.

    Second, all of these transient request-related objects are getting promoted into the old gen, which is quickly getting filled up with objects that will soon be garbage. If these transient objects are moved into the old gen faster than the CMS collector can keep up, a major gc will be triggered, stopping cassandra for several seconds.

    If this was the case, it seemed likely that increasing the size of eden would solve our problems. By reducing the rate at which eden reaches capacity, more of eden’s contents will be garbage by the time a collection runs. This makes the ParNew collection faster and reduces the rate at which transient objects are pushed into the old gen. More importantly, objects would be promoted at a rate the CMS collector can handle, eliminating major, multi-second, stop-the-world collections.

    I didn’t take any screenshots of jstat when the garbage collector was misbehaving, but this is an approximation of what we were seeing.

    In this image, we can see that there are a lot of new gen collections (see the YGCT column). And we can see the survivor section usage switching back and forth very often, indicating a lot of young gen collections. Additionally, the old gen is continuously increasing as objects are prematurely promoted.

    New GC settings

    The initial heap settings were a total heap size of 8GB and a new gen size of 800MB. Initially, we tried doubling the new gen size to 1600MB, and the results were promising. We were not having any more runaway latency spikes, but we were still seeing read latencies as high as 50ms under heavy load, which, while not catastrophic, made our application noticeably sluggish. The new gen collection times were still higher than 50ms.

    After a few days of experimenting with various gc settings, the final settings we converged on were 10GB total for the heap and 2400MB for the new gen. We had increased the total heap by 25% and tripled the size of the new gen. The results have been excellent. With these settings, I haven’t seen the read latencies go above 10ms, and I’ve seen the cluster handle 40 thousand plus reads per second with latencies around 7ms. New gen collection times are now around 15ms, and they happen slightly less than once per second. This means that Cassandra went from spending around 20% or more of its time collecting garbage to a little over 1%.
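    In cassandra-env.sh terms (assuming the stock heap variables that file exposes), those final settings look like:

```shell
# cassandra-env.sh -- total heap and new gen size after tuning
MAX_HEAP_SIZE="10G"
HEAP_NEWSIZE="2400M"
```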

    This is a look at the garbage collection activity on one of our tuned up nodes today.

    You can see the eden consumption creep up over 2 seconds (see the EU column), then a minor GC is performed. Additionally, the old gen size is pretty stable.

    Tools we used to diagnose the problems.

    1) OpsCenter: DataStax’s OpsCenter tool was very helpful and provided a high-level view of our cluster’s health and performance.

    2) GC logging: the garbage collection logs aren’t enabled by default, but they give a lot of insight into what the garbage collector is doing, and how often it’s doing it. To enable the gc logs, uncomment the GC logging options in cassandra-env.sh.

    3) iostat: reports disk usage. Running iostat -dmx 1 will print out your disk usage stats every second. You can use this to quickly determine if disk is your bottleneck.

    4) jstat: as mentioned earlier, jstat provides a real-time look at what gc is doing, and is very helpful. With jstat, you can watch the usage of eden, the survivor spaces, and the old gen, along with gc counts and times, and watch as the jvm shifts things around the different sections. Using the command jstat -gc <pid> 250ms 0 will print the status of all generations every quarter second.

    For experimentation, we used a single node in our production cluster as our test bed. We would make incremental changes to the node’s settings and watch how it performed relative to the other nodes.


    Oracle article on gc tuning

    Oracle article on jstat usage

  4. An Introduction to using Custom Timestamps in CQL3

    One interesting feature of Cassandra that’s exposed in CQL is applying a custom timestamp to a mutation. To understand what the impact of this is, we first need to dive into Cassandra’s internals a bit. Once we understand how reads and writes are handled, we can start to explore potential uses for custom timestamps.

    First, let’s discuss storage. At the most basic level, each piece of data is stored in a Cell. A cell has these properties:

    protected final CellName name;
    protected final ByteBuffer value;
    protected final long timestamp;

    On a write, the timestamp stored here can optionally be provided by the client (via the USING TIMESTAMP clause), or generated automatically by the server. When records are written, they will always have a timestamp associated with them. Due to the nature of how Cassandra’s data is stored on disk, it’s possible to have multiple Cells for a given column name, and the timestamps are used to determine which one is the most current. This applies to inserted values as well as deletions, which take the form of a tombstone. I suggest reading this doc from the Cassandra wiki to learn more.

    There is another implementation detail of Cassandra that’s extremely important: in most cases, inserts are exactly the same as updates. There is no differentiation because everything’s effectively an insert. Data is never updated, it’s simply written and merged on reads.

    Now that we understand how Cassandra uses the timestamp to resolve which is the “correct” data value, we can start to think of ways to make custom timestamps useful. The one which we’ve started using at SHIFT is writing deletions into the future.

    Why would we want this? Because a deletion written 30 seconds into the future will cause any mutations for the next 30 seconds to be effectively ignored. This can be used as an extremely cheap lockout mechanism.

    Let’s consider an example. We have a group_membership table, which lets us view all the groups a particular user is in. We also store whether the user is an admin, and when they last visited the group.

    create table group_membership (
        user_id int,
        group_id int,
        admin boolean,
        last_visited timestamp,
        primary key (user_id, group_id)
    );
    What happens when we want to remove the user from the group? We issue a delete. What if we want to update the last_visited timestamp? The update looks like this:

    cqlsh:test> update group_membership set last_visited = '2013-12-26' where user_id = 1 and group_id = 1;

    As we mentioned before, this update is effectively the same as an insert. I performed the above query on an empty table, and yet the data is there:

    cqlsh:test> select * from group_membership;

     user_id | group_id | admin | last_visited
    ---------+----------+-------+--------------------------
           1 |        1 |  null | 2013-12-26 00:00:00-0800

    This behavior is convenient for the sake of fast writes, but can be problematic when race conditions are introduced. For example, consider this series of events:

    1. Membership is read (thread 1)
    2. Membership is deleted (thread 2)
    3. Membership last_visited is updated (thread 1)

    The net result of this will be a record for the user & group, even though they were removed. With a relational DB, the update would do nothing if it was issued after the delete. With Cassandra we have to be careful.

    At this point, if all we do to check for the user’s membership in a group is look for the existence of a record, we end up with a false positive. The solution? Delete the membership into the future. Our sequence looks like this:

    -- the future!
    delete from group_membership using timestamp 1388101196179000 where user_id = 1 and group_id = 1;
    1. Membership is read (thread 1)
    2. Membership is deleted one minute into the future (thread 2)
    3. Membership last_visited is updated (thread 1, but it doesn’t matter as long as it happens within the one-minute window)

    We have not completely removed the opportunity for a race condition, but we’ve made it extremely unlikely. This is a faster alternative to locking, since locking requires a read before write.
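    Since CQL’s USING TIMESTAMP expects microseconds since the epoch, computing a future deletion timestamp from Python might look like this (a sketch; the helper name is ours):

```python
import time

def future_timestamp(seconds_ahead, now=None):
    """Epoch microseconds `seconds_ahead` seconds in the future.

    CQL's USING TIMESTAMP clause takes microseconds since the epoch,
    the same unit as the literal 1388101196179000 used above.
    """
    now = time.time() if now is None else now
    return int((now + seconds_ahead) * 1000000)

# One minute ahead of a fixed "now", for a deterministic example.
ts = future_timestamp(60, now=1388101136)
```

    The returned value can be interpolated directly into the delete statement shown above.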

    If you’re a Python user using cqlengine, you may find it useful to know that we’re working on adding custom timestamp support for the next cqlengine release. We’ll put out a blog post covering that functionality when it’s released.

  5. CORS with Wildcard Subdomains Using Nginx

    First off - what is CORS? CORS is a means of allowing cross-site requests. You can read up on its features in lengthy detail here. Simply put, it lets you be on one domain and perform XMLHttpRequests to another, which is normally not allowed due to the Same-Origin Policy.

    The domains that may hit your server must be specified in your configuration. You are allowed to use a blanket wildcard, but if you’re allowing cookie sharing you’re even more restricted: you need to specify exact domains, and wildcards are not allowed. But what if you want to allow *.yoursweetdomain.com? It turns out that’s not supported by the spec, but with some trickery you can make it happen. Here’s an example of an nginx server config allowing CORS from any subdomain of yoursweetdomain.com:

    server {
        server_name yoursweetdomain.com;
        root /path/to/your/stuff;
        index index.html index.htm;

        set $cors "";
        if ($http_origin ~* (.*\.yoursweetdomain\.com)) {
            set $cors "true";
        }

        location / {
            if ($cors = "true") {
                add_header 'Access-Control-Allow-Origin' "$http_origin";
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PUT';
                add_header 'Access-Control-Allow-Credentials' 'true';
                add_header 'Access-Control-Allow-Headers' 'User-Agent,Keep-Alive,Content-Type';
            }
            if ($request_method = OPTIONS) {
                return 204;
            }
        }
    }
    You can match any regular expression you’re interested in, not just domains, but for simplicity’s sake that’s what I’m showing. The server will echo back in its headers the same origin the request came from, and only if it matches the regex. It’s broken out into an if statement and a set because that’s easier to work with if you want to match on multiple rules.
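    One subtlety worth checking is the regex itself: an unescaped dot matches any character, so a pattern like .yoursweetdomain.com is looser than it looks. A quick Python sanity check of an anchored, dot-escaped variant (this pattern is our tightened illustration, not the exact nginx one):

```python
import re

# Anchored, dot-escaped version of the Origin pattern: scheme, optional
# subdomains, then the literal apex domain.
ALLOWED = re.compile(r'^https?://([a-z0-9-]+\.)*yoursweetdomain\.com$', re.IGNORECASE)

def origin_allowed(origin):
    """Return True if the Origin header value matches the allowed pattern."""
    return ALLOWED.match(origin) is not None

ok = origin_allowed('https://app.yoursweetdomain.com')
bad = origin_allowed('https://evilyoursweetdomain.com')  # no dot before the apex
```

    The second origin would slip through an unescaped, unanchored pattern, which is exactly why escaping the dots matters.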

    In figuring all this out, this gist proved to be extremely helpful.

  6. Migrating Databases With Zero Downtime

    We recently completed a massive database migration here at SHIFT.com. We migrated our application from using MongoDB / Titan as the primary datastore, to Cassandra, with no downtime or performance degradation.

    Jon Haddad and I recently did a webinar with DataStax to talk about the migration, and there was a lot of interest not only in our experience moving to Cassandra, but also in the specifics of how we migrated our entire application with no downtime. This post talks about the mechanics of how we went about migrating the site; it doesn’t get into data modeling or performance numbers. The webinar answers some of those questions, and if you missed it you can catch it here: http://www.youtube.com/watch?v=9XHigNKJJhI.


    During the course of the past year, we had been battling performance problems with the Titan database, as well as the DevOps hassles of running a MongoDB cluster. Additionally, an unrelated backend data collection system running MongoDB had suddenly hit a performance ceiling, after which scaling became extremely difficult… something we wanted to avoid with SHIFT. One of the main causes of pain with both databases was that they provided too much abstraction over how your data was laid out on disk and across machines, which made performance unpredictable. What attracted us to Cassandra was that it provides a great deal of control over how your data is laid out on disk, and exposes it through an elegant query language. It is also much easier to work with from a DevOps perspective.

    After a great deal of thought, discussion, and testing, we decided that moving our application to Cassandra would be the best move, long term, for SHIFT.


    Clearly, migrating an entire application from one database to another is a big undertaking. Our situation was complicated by the fact that this wasn’t like moving from MySQL to PostgreSQL: the databases we were moving between had completely different data models, so none of the existing schemas would work with Cassandra. Additionally, SHIFT is an active application that is central to many people’s day-to-day work, so we needed to perform this migration without any downtime and without any degradation in performance. Also, since there were so many components to migrate, pulling a few all-nighters, shutting the site down, migrating the data, and then bringing the site back up wasn’t a sustainable or practical approach. We needed to perform the migrations during the day, while people were using the site.

    Our Solution

    The solution we came up with was to split the migration into two parts: writing, then reading.

    For each component that we were migrating, we would come up with a data schema that made sense for that part of the system. We would then make a branch off of master, the ‘writes’ branch. The writes branch was responsible for two things. First, it would mirror all writes to Mongo/Titan into their equivalent Cassandra tables, so every time a Mongo document was saved, a corresponding row would be saved to Cassandra. Second, it would have a migration script that would copy all of our historical data for that component into Cassandra. So once the writes branch was deployed and the migration script had run, all of our data was in both Mongo/Titan and Cassandra, and anything that was created or updated was written to both places.
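    The writes-branch pattern reduces to a sketch like this (the store class and method names are hypothetical in-memory stand-ins for the real Mongo/Titan and Cassandra clients):

```python
class Store(object):
    """Stand-in for a datastore client."""
    def __init__(self):
        self.rows = {}

    def save(self, key, value):
        self.rows[key] = value

mongo = Store()      # legacy store, still the source of truth
cassandra = Store()  # new store, receiving mirrored writes

def save_document(key, value):
    """The writes branch: every write goes to both datastores."""
    mongo.save(key, value)
    cassandra.save(key, value)

def run_migration():
    """The migration script: backfill all historical rows into Cassandra."""
    for key, value in mongo.rows.items():
        cassandra.save(key, value)

# A row written before the writes branch existed, then the backfill,
# then a fresh write going through the mirrored path.
mongo.rows["old-doc"] = {"title": "written before the writes branch"}
run_migration()
save_document("new-doc", {"title": "written after deploy"})
```

    After the backfill and the mirrored write, both stores hold identical data, which is the invariant the reads branch depends on.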

    Next, we would make a branch off of our writes branch: the ‘reads’ branch. The reads branch switched all reads from Mongo/Titan to our new Cassandra table(s), removed all references to Mongo/Titan for the migrated component, and stopped all writes to it. In practice, this is the most complex branch to write because of minor variations in the way things come back from the different databases.

    Using this strategy, we were able to transparently run major migrations during the day, while people were using the site. People would occasionally notice that certain parts of the site got faster, but that was about it.

    If you’re thinking about doing migrations using this strategy, here are some things that will save you some time / sanity.

    • 1. Migrate only one thing at a time. It can be tempting to try to kill several birds with one stone, but you’re only adding complexity. Migrating a single component at a time makes it much easier to isolate any bugs you’ve introduced in your migration.

    • 2. Write both the read and write branches before deploying anything. You’ll often find things that you missed in the writes branch while writing the reads branch. Additionally, knowing that your reads branch is bulletproof before deploying the writes branch takes a lot of the pressure off. Finally, having a reads branch ready to go means that you can deploy it as soon as your migration script is done. This reduces the window in which inconsistencies between the two databases can be introduced.

    • 3. Spot check your data after deploying the writes branch. Hopefully, your unit tests cover most of your use cases, however, it’s still important to check that your data is actually being written as expected in production.


    With the migration completed, we are very happy with our approach. We didn’t have any major problems during the migration, and there was no downtime or performance degradation. We’re also very happy with our switch to Cassandra. Our site has become much faster as a result, and it’s really easy to work with.

  7. CQLengine 0.8 Released

    We’ve just released version 0.8 of cqlengine, the Python object mapper for CQL3. Below are the new features.

    Table Polymorphism

    The big announcement for this release is the addition of support for table polymorphism. Table polymorphism allows you to read and write multiple model types to a single table. This is very useful for cases where you have multiple data types that you want to store in a single physical row in cassandra.

    For instance, suppose you want a table that stores pets owned by someone, and you want all the pets owned by a particular owner to appear in the same physical Cassandra row, regardless of type. You would set up your model class hierarchy like this:

    import uuid

    from cqlengine.columns import Float, Text, UUID
    from cqlengine.models import Model

    class Pet(Model):
        __table_name__ = 'pet'
        owner_id = UUID(primary_key=True)
        pet_id = UUID(primary_key=True, default=uuid.uuid4)
        pet_type = Text(polymorphic_key=True)
        name = Text()

        def eat(self, food):
            pass

        def sleep(self, time):
            pass

    class Cat(Pet):
        __polymorphic_key__ = 'cat'
        cuteness = Float()

        def tear_up_couch(self):
            pass

    class Dog(Pet):
        __polymorphic_key__ = 'dog'
        fierceness = Float()

        def bark_all_night(self):
            pass

    After calling sync_table on each of these models, the columns defined in each model will be added to the pet table. Additionally, saving Cat and Dog models will save the instances to the pet table with the metadata needed to identify each row as either a cat or a dog.

    Next, let’s create some rows:

    owner_id = uuid.uuid4()
    Cat.create(owner_id=owner_id, name='fluffles', cuteness=100.1)
    Dog.create(owner_id=owner_id, name='destructo', fierceness=5000.001)

    Now if we query the Pet table for pets owned by our owner id, we will get a Dog instance, and a Cat instance:

    print list(Pet.objects(owner_id=owner_id))
    [<Dog name='destructo'>, <Cat name='fluffles'>]

    Note that querying one of the sub types directly, like:

    print list(Dog.objects(owner_id=owner_id))

    will raise an exception if the query returns a type that’s not a subclass of Dog.

    Normally, you should perform queries from the base class, in this case Pet. However, if you do want the ability to query a table for only objects of a particular sub type, like Dog, set the polymorphic_key column to indexed.
    When the polymorphic key column is indexed, queries against subtypes like Dog will automatically add a WHERE clause to the query that filters out other subtypes.

    To properly setup a polymorphic model structure, you do the following:

    1. Create a base model with a column set as the polymorphic_key (set polymorphic_key=True in the column definition)
    2. Create subclass models, and define a unique __polymorphic_key__ value on each
    3. Run sync_table on each of the sub tables

    About the polymorphic key

    The polymorphic key is what cqlengine uses under the covers to map logical cql rows to the appropriate model type. The base model maintains a map of polymorphic keys to subclasses. When a polymorphic model is saved, this value is automatically saved into the polymorphic key column. You can set the polymorphic key column to any column type that you like, with the exception of container and counter columns, although Integer columns make the most sense.

    VarInt column

    Thanks to a pull request from Tommaso Barbugli, cqlengine now supports the varint data type, which stores integers of arbitrary size (in bytes).

    class VarIntDemo(Model):
        row_id = Integer(primary_key=True)
        bignum = VarInt()
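
    On the wire, Cassandra stores a varint as a variable-length two’s-complement big-endian byte string, so small numbers cost a byte or two while huge ones simply use more. A rough sketch of that encoding (our own illustration in modern Python, not the driver’s code):

    ```python
    # Hypothetical sketch of the varint wire format: the shortest
    # big-endian two's-complement byte string that holds the value.
    def encode_varint(n):
        length = (n.bit_length() + 8) // 8  # +1 bit of headroom for the sign
        return n.to_bytes(length, 'big', signed=True)

    def decode_varint(data):
        return int.from_bytes(data, 'big', signed=True)

    assert encode_varint(127) == b'\x7f'      # one byte suffices
    assert encode_varint(128) == b'\x00\x80'  # needs a sign byte
    assert decode_varint(encode_varint(10**30)) == 10**30
    ```

    This is why VarInt is a good fit for values that can outgrow a fixed-width integer column.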
  8. Interview with Datastax regarding our switch from MongoDB to Cassandra

    A few weeks back we were interviewed by Datastax regarding our switch from MongoDB to Cassandra; here it is.

  9. Introducing MacGyver

    Today we’re happy to announce the release of MacGyver, the duct tape and Swiss Army knife for AngularJS applications.

    What is it?

    MacGyver is an AngularJS module comprised of directives, filters and utilities for quickly developing your UI. It was built to meet the need for reusable components between multiple AngularJS applications.


    What does it include?

    MacGyver includes the following directives:

    • autocomplete

    • datepicker

    • file upload

    • menu

    • modal

    • spinner

    • tag autocomplete

    • tag input

    • time input

    • tooltip

    Table Directive

    The Table directive allows AngularJS developers to display and manipulate tabular data using a suite of interrelated directives. Modular in nature, these directives are designed to be extended to add additional domain specific functionality.


    To help format the data in your templates, MacGyver includes several filters, such as exposing methods from the Underscore.string library, turning timestamps into human-readable strings, and easily pluralizing words.

    And more…

    Additional features include a utils service with useful methods to use in your controllers and custom directives, as well as a set of event directives, similar to those which will be available when AngularJS 1.2.0 is released.

    To look under the hood and get a full list of filters, events, other directives and the documentation, visit the MacGyver docs.

    Getting MacGyver

    Getting and using MacGyver is easy. You can install it via Bower or download it from GitHub.

    To install via Bower, make sure you have Bower installed, then run:

    bower install angular-macgyver

    Once you have MacGyver in your project, just include “Mac” as a dependency in your Angular application and you’re good to go.

    angular.module('myModule', ["Mac"])
  10. Implementing a Python OAuth 2.0 Provider - Part 3 - Resource Provider

    This is the last part in a series on using pyoauth2 to let any application connect to your app via OAuth. The Resource Provider is a small but vital piece that ensures that application resources are sufficiently protected and granted only to correctly authenticated OAuth sessions.


    Part 1: Basics of the OAuth 2.0 Authorization Flow

    Part 2: Implementing a Python OAuth 2.0 Provider - Part 2 - Authorization Provider

    In this example, as always, we will be using the fantastic Flask, along with Redis for session storage. The resource provider is accessed by your session class.

    import json
    from flask import request
    from pyoauth2.provider import ResourceProvider, ResourceAuthorization

    class MyCompanyResourceAuthorization(ResourceAuthorization):
        """Subclass ResourceAuthorization to add a user_id attribute."""
        user_id = None
        # Add any other parameters you would like to associate with
        # this authorization (OAuth session)

    class MyCompanyResourceProvider(ResourceProvider):
        _redis = None

        @property
        def redis(self):
            # Lazily create and cache the Redis connection
            if not self._redis:
                self._redis = some_redis_connect_function()
            return self._redis

        def authorization_class(self):
            return MyCompanyResourceAuthorization

        def get_authorization_header(self):
            """Return the request Authorization header.

            :rtype: str
            """
            return request.headers.get('Authorization')

        def validate_access_token(self, access_token, authorization):
            """Validate the received OAuth token against our unexpired tokens.

            :param access_token: Access token.
            :type access_token: str
            :param authorization: Authorization object.
            :type authorization: MyCompanyResourceAuthorization
            """
            key = 'oauth2.access_token:%s' % access_token
            data = self.redis.get(key)
            if data is not None:
                data = json.loads(data)
                ttl = self.redis.ttl(key)
                # Set any custom data on the Authorization here
                authorization.is_valid = True
                authorization.client_id = data.get('client_id')
                authorization.user_id = data.get('user_id')
                authorization.expires_in = ttl

    resource_provider = MyCompanyResourceProvider()

    Lastly, to use the OAuth resource provider in your session class, you could set a flag on the session:

    from my_company_oauth import resource_provider
    class Session:
        def __init__(self):
            # Handle OAuth Authorization and store on the session object
            self.authorization = resource_provider.get_authorization()

    Once the OAuth authorization is set on the session object (this should happen during the initialization of the session), it can be used to ensure that access is allowed on any particular endpoint.

    For example:

    from flask import session

    def some_resource():
        # Ensure that this request is made with a valid OAuth
        # access token.
        if not session.authorization.is_oauth:
            raise Exception("Not authorized")
        # Since we subclassed the Authorization class
        # we have access to all of its properties
        user = User.find(session.authorization.user_id)
        # Return the secret answer
        return "The code is 42"

    The proper result will only be shown if the route was accessed using a valid OAuth access token. Thus the lifecycle of an authorization is complete.
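
    If several routes need this check, it can be factored into a decorator so each endpoint stays clean. A sketch of that pattern, using a hypothetical `require_oauth` helper (not part of pyoauth2) that receives a callable returning the current session:

    ```python
    from functools import wraps

    # Hypothetical helper, not part of pyoauth2: wrap a view function so
    # the session's OAuth authorization is verified before the view runs.
    def require_oauth(get_session):
        def decorator(view):
            @wraps(view)
            def wrapper(*args, **kwargs):
                current = get_session()
                if not getattr(current.authorization, 'is_oauth', False):
                    raise Exception("Not authorized")
                return view(*args, **kwargs)
            return wrapper
        return decorator
    ```

    A view would then be wrapped as `@require_oauth(get_current_session)`, where `get_current_session` is whatever function returns the session object in your app.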