Tommy Lee Software development for everyone else

Sorting between multiple Models in Ruby on Rails

Recently, I ran into a situation where I had two different models and needed to display both types of records together in chronological order. We could grab all the records and apply a simple sort.

  comments = Comment.all
  posts = Post.all

  # Combine them together and sort.
  comments_and_posts = comments + posts
  comments_and_posts.sort! { |a, b| a.created_at <=> b.created_at }

  # Use Kaminari to paginate across the combined array
  results = Kaminari.paginate_array(comments_and_posts).page(params[:page])

While this is easy to read and understand, it becomes problematic as Comments and Posts grow. This method requires loading every record before applying a sort order. Pagination suffers as well, since we still need to grab both entire sets of records on every page.

We can let the database handle the sorting for us, but Rails does not document a clean ORM way to perform this kind of operation. We can use plain SQL instead.

Raw SQL Union

We use the UNION operator in SQL to combine both records together, then select from a subquery.

Since there is a possibility that ids are shared between the models, we need to generate a unique_id in these queries.

SELECT id, group_type, unique_id, created_at
FROM (
  SELECT id,
         'comments' as group_type,
         CONCAT('comments', id) as unique_id,
         created_at
  FROM comments
  WHERE active = 1

  UNION ALL

  SELECT id,
         'posts' as group_type,
         CONCAT('posts', id) as unique_id,
         created_at
  FROM posts
  WHERE active = 1
) as results
ORDER BY created_at DESC

We can add a LIMIT and OFFSET for pagination. Using a raw SQL query, we can pull out the information easily.
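As a minimal sketch of the LIMIT/OFFSET math, here is a small helper that turns a 1-based page number into the values we would interpolate into the query. The `PER_PAGE` constant and the method name are my own illustrative choices, not from the original code:

```ruby
# Hypothetical helper: translate a 1-based page number into a SQL
# LIMIT/OFFSET clause. PER_PAGE is an assumed page size.
PER_PAGE = 25

def pagination_clause(page)
  page = [page.to_i, 1].max          # guard against page 0 or negative input
  offset = (page - 1) * PER_PAGE
  "LIMIT #{PER_PAGE} OFFSET #{offset}"
end

pagination_clause(1) # => "LIMIT 25 OFFSET 0"
pagination_clause(3) # => "LIMIT 25 OFFSET 50"
```

In a controller, something like `pagination_clause(params[:page])` could be appended to the UNION query before executing it.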

@connection = ActiveRecord::Base.connection
result = @connection.exec_query(sql_query)
comments_and_posts = []

result.each do |row|
  comments_and_posts << row
end

While we could go through each row of comments and posts and find each record by ID, that would cost us about 25 additional queries. With a little extra work, we can reduce the number of queries down to 2.

# Store their order in the array.
id_positions = { |x| x['unique_id'] }

items = comments_and_posts.group_by { |x| x['group_type'] }
items.each do |group_type, item|

  # Get only the ids to reduce the number of queries needed
  ids = item.collect { |x| x['id'] }

  if group_type == 'posts'
    content_items = Post.where(id: ids).to_a
  elsif group_type == 'comments'
    content_items = Comment.where(id: ids).to_a
  end

  # Replace the original row hashes with the loaded records
  content_items.each do |content|
    unique_id = group_type +
    content_index = id_positions.index(unique_id)
    comments_and_posts[content_index] = content
  end
end

Drawbacks to Raw SQL

By diving into raw SQL, we have direct control over what we query. For smaller queries, this is manageable.

We lose some key features, like scopes. Scopes are composable and simplify our understanding of queries. We would have to maintain a separate set of queries if we decide to add filtering.

Raw SQL also ties us to a specific flavor of SQL. If we decide to switch from MySQL to Postgres, some queries would change and break.

Using Arel to access existing scopes

We’re looking to use ActiveRecord’s Arel to improve our codebase’s readability. Note: since this is a private API, it may change without warning. Arel is expected to become part of the public API in Rails 5.1.

We can replace the raw SQL by using Arel as a query builder. For any ActiveRecord::Relation, we can call .arel to drop into Arel.

Rewriting SQL into Arel Components

Since Arel is used to build queries only, we’ll end up transforming the resulting query builder to a SQL String with .to_sql.

So we’re going to create each part of our query piece by piece.

comments_query = Comment.where(active: 1).arel
# Strip all the SELECT choices
comments_query.projections = []

# Select our items that we need
comments_query.project(' as id',
                       "'comments' as group_type",
                       "CONCAT('comments', id) as unique_id",
                       'created_at')

# comments_query.to_sql generates
#
# SELECT
#   id as id,
#   'comments' as group_type,
#   CONCAT('comments', id) as unique_id,
#   created_at
# FROM comments
# WHERE active = 1

posts_query = Post.where(active: 1).arel
posts_query.projections = []
posts_query.project(' as id',
                    "'posts' as group_type",
                    "CONCAT('posts', id) as unique_id",
                    'created_at')

# posts_query.to_sql generates
#
# SELECT
#   id as id,
#   'posts' as group_type,
#   CONCAT('posts', id) as unique_id,
#   created_at
# FROM posts
# WHERE active = 1

We can then combine both of the queries together with a UNION.

combined_query = comments_query.union(:all, posts_query)

# ( #{comments_query} UNION ALL #{posts_query} )

query =, Arel.sql("#{combined_query.to_sql} as results"))

# ( #{comments_query} UNION ALL #{posts_query} ) as results

Calling, sql_string) gives us the equivalent of wrapping combined_query up as a subquery.

limit = 25
offset = 0
sql = query.project(Arel.star)
           .order('created_at DESC')
           .take(limit)
           .skip(offset)
           .to_sql

# SELECT * FROM ( #{comments_query} UNION ALL #{posts_query} ) as results
# ORDER BY created_at DESC
# LIMIT 25
# OFFSET 0

results = ActiveRecord::Base.connection.select_all(sql)

Standardized Front-end Tools to Increase Productivity

One of the challenges I ran into while learning modern front-end development and frameworks was putting all the ideas together. While I have a few years of back-end development and server-side languages, my experience with client-side code, Javascript, and SPAs was limited.

Learning difficulties

When I played with Angular, React, or Meteor back in 2014-2015, I had a lot of questions. Some involved the tooling, but many were about how to set up a project.

  • How do I structure a project?
  • What are the recommended best practices?
  • How do I deploy?
  • How do I handle testing?
  • How do I manage importing libraries and other packages?

Lost in the forest

These modern frameworks required a lot more planning, thought, and research for me to get running. Having to develop my own project structure, and going through a few restructures, was frustrating.

I had a little assistance from a local Javascript expert, Chris McClean, and fellow developers helped out in the learning process. It would be easier for everyone, though, if there were a simplified process for setting up and getting ready to go.

Starter Packs and Generators

One of the ways we used to start projects was using existing starter packs, or Yeoman generators. This gave us a better experience in generating a project skeleton, and allowed us to get up and going.

Set it up for me

Having a default preference set up allows us to move on to the more important areas of our app. Starters usually solve issues like:

  • ES6 Transpiling
  • Minification and Compilation
  • Generating a Production Target
  • Wiring up the folders

Starter packs

While this made it a lot easier to build out projects, it had some issues. Some starters are tied to a specific configuration. I used an older Yeoman starter for Angular, which relied on less recent build tools like Bower and Grunt.

It was also structured as an MVC project, while the community recommends a feature-based project structure. In general, starter packs:

  • Relied on the generator’s documentation
  • Could use older, no longer current best practices
  • Were hard to debug if you don’t know what the different tools are doing.

We love Official Command Line Tools

It’s increasingly common for web developers to use some form of command line tooling to improve productivity. In the Javascript world, it’s almost required to use npm to manage packages.

Recently, the major Javascript frameworks have released their own CLI tools to help create new projects and components, and to build and serve. These were inspired by ember-cli, and allow us to start working without worrying about all the other intricacies.

I’ve been playing with Angular 2’s CLI (ng), and I feel it improves my quality of life for development. Beyond generating default code, it provides the following:

  • Well defined conventions
  • Easier support
  • Keeps my mental overhead small

Conventions reduce conflict

While people argue about different kinds of conventions, many will agree that conventions are a good thing. By having a defined set of conventions for a project, people would generally know where to put things.

When the documentation is not a best practice

One of the issues I ran into while learning Angular 1 came from a lack of agreed-upon conventions. While the official documentation showed one way to create directives, the community documentation and best practices recommended a different way.

Have defaults cover most common use-case

Angular 2 took recommendations and practices from the community and built tools that support them. The decision of how to set up your Components or Services is already made. Calling ng generate component my-component-name creates everything you need.

Easier support

One of the benefits of having a standard structure in an application is easier ways to support the application later.

Problems are Google-able

When we have to create our own structure while we’re learning, lots of minor breakage and mysterious problems appear. We try to google the issues, and many of the cases we find are close to ours, but the fixes don’t work.

When standards exist, we have an understanding of what should happen, and it makes it easier to communicate.

Easier to onboard new developers

Newer people have an easier time looking up documentation, and can use the built-in tools to accomplish tasks. When we work on a project with little documentation, or one using a bespoke framework, it can be difficult to comprehend what exactly is going on.

I can focus on the big things.

There are times where I feel like I’ve written the same set of code over and over again, looking up the same documentation each time because it was not obvious how I should structure an area.

Standard tools let me skip those decisions, and keep common tasks easily available.

Finishing your blog posts

I have about 10-15 half-finished ideas and blog posts. Each of these ideas takes off in full force! Full of passion and enthusiasm, the words and ideas come easily.

Then, somewhere around the first few hours, I lose focus on the goals of the article.

The following is a small set of ideas and strategies that I use to cope with completing blog posts, and any project.

1. Prioritize Completion

We say we’ll do it in our free time, but that free time will always be filled.

We need to make finishing these blog posts and ideas a higher priority. I remember Stephen Covey’s Quadrants for Time Management as a way to decide what’s important.

Aim for Important and Not Urgent

We should categorize items like blog posts as Not Urgent and Important. These should be our highest priority.

Set a Goal

Goals and deadlines can give us focus on what we can accomplish in a timely manner.

When we don’t have goals or deadlines, ideas such as this will be put on the back-burner.

I aim for 1 blog post a month as a goal.

2. Write for a Single Person

Think of a certain person when we write a post: someone who could definitely find our article useful, informational, or entertaining.

Having an audience in mind reminds us of who is waiting for the article, and the kind of information to include. We can tailor our writing to this person.

This may not accommodate everyone, but that’s okay. Let’s aim for effectiveness, rather than appeal.

Technical Articles

Technical articles are usually difficult for me to pin down.

One part is trying to justify that I know the subject. So I:

  • Write in great detail
  • Argue and illustrate the subtleties
  • Get really dry on the topic

Another part of me writes in a way to avoid arguments, leading to a lot of:

  • Cross chatter
  • Self Arguing
  • Showing too many sides of the argument

While I enjoy expounding on all these details, and being well educated on the topic, too often I forget who is actually reading.

For tutorials, I illustrate my way as ‘a way’ to accomplish the task, with links to references or other people’s tutorials.

For more subjective writing, I try to present the other side fairly.

3. Overwhelmed? Rambling? Reduce Scope

When we create stuff, sometimes we want to go very deep into the nitty gritty, and talk about the finer details, and subtleties.

Now to write or finish a blog post, we have to first create the universe.

Then we have an outline that is longer than anyone wants to read.

What to cut

We should only cut down our scope if it starts being unmanageable.

We focus each section down to one purpose. Maybe the entire blog post idea should be reduced down to one idea.

4. Dedicate some time!

Currently, this is about the third or fourth writing session spent completing this article. I try to squeeze these blogs into my downtime, but I love not having downtime.

One of my favorite things about free-time is that I spend a lot of that time deciding what to do with the precious time.

Part of the Schedule

The lack of scheduling, and the freedom to do anything results in a whole lot of nothing.

I’ve worked on giving my free time a more rigid schedule. After work and dinner, I have a few hours to spend.

Each week, I schedule about an hour or two for the blogging and writing, and no more. Any more time spent might lead to increased distractions and slacking off for me.

No Confidence? Scared of Completing?

We worry too much about the opinions of others and their negative attitudes. We let the invisible hating peanut gallery crawl into our brain, and shake up our attempts at anything.

Then we sit around, with nothing on the screen, paralyzed by the fear of our own inadequacies. We feel that we’re not good enough, that everyone will rip us to shreds, and the best course of action is to do nothing.

Then the existential dread kicks in, and life spirals out of control, due to us trying to write a blog article.

Well, probably. That might happen.

An Optimistic Approach

Being frightened is okay. It’s something that never really goes away, but we sort of get used to the feeling of being uncomfortable. Like learning a new language, trying something new on, the natural anxiousness that arises, that’s normal.

Hey, you’re doing it. That’s more than a lot of others can say. You gave yourself a challenge, and you’re fighting your way through it. That’s awesome. Look how far you’ve come; this was a blank space before.

For me, I can have a fully rendered page and say, ‘Hey, I made this!’ The prize really is the completion of the long road.

There will be people out there that haven’t heard of your information, and will look to your piece of information as useful. It may not be until a very long time from now, but that possibility exists.

Your work will be there for someone’s One in 10,000.

A Pessimist’s Approach

We may have put our work too high on the pedestal. We’re so concerned about the criticism, we forget about being realistic. There’s so much content out there, what will guarantee that people read it? Most people will give it a passing glance. At least someone’s reading.

When was the last time you felt so impassioned over an internet article or blog post that you wrote a scathing tirade, attacking everything about the person? Well, most people don’t have really strong opinions about it. 90% of people just read it and move on.

The angry critics reflect more on themselves than on your actual writing. This blog post on online harassment resonated with me and helped me understand the issues I run into with comments online.

For me, I know that the only people who are going to read this blog post are those that I share it with, and maybe a few bot scrapes.

Whatever Keeps you writing

The important thing to take away is to do whatever you need to get these projects complete.

Tips for Having a great Hackathon Experience

I recently participated in a few hackathons and hack nights this year. They are a great way to develop a lot of interesting ideas in a short amount of time. It’s extremely rewarding to get a product out the door over a weekend, or even a night.

Some companies have made hackathons feel extremely exploitative. They’ll stuff a bunch of developers in a room, with free pizza and some prizes and they get to keep the results of your work.

The next time you see an interesting event, please keep these points in mind.

Find a hackathon that puts you first

Choose the ones that support your rights. We work hard on our projects, and they belong to us. A lot of issues arise when that isn’t the case.


Each of the ones I chose to participate in had the following criteria:

  • The participant keeps all rights to the code they produce
  • The event has been vetted by credible people with a good track record of support.

Finding a credible event can be difficult. Talk with your Local meetups and user groups for recommendations. In Orlando, we have a few tech advocates that go out of their way to check on these.

Once you find a decent hackathon that would benefit you, it’s time to prepare yourself!

Have a goal in mind

Having achievable expectations allows us to feel successful at these events.

Fellow hackathon team member Mick Musak recommended setting my expectations at the start of the Hack the Arena event.

I chose a goal of just completing an app, and learned a bit more about how Web Sockets worked in Node.js.

I brought my friend Timothy Findling, who is very strong at data-oriented projects. He had very limited experience with modern Javascript and web apps. His goal was to receive free swag.

Keep your mind grounded

At the Hack the Arena event, there were multiple groups that had similar ideas. Our idea of a digital light scheme was done on a hardware level, and I felt eclipsed. I messed up on the presentation, and felt really embarrassed.

Seeing that I was upset, my team members, Mick and Tim, told me their goals had been fulfilled. They were proud to participate, and were happy to have the project on their GitHubs.

By setting the expectation, they were satisfied.

Here’s a list of my goals from past hackathons:

  • Learn React
  • Learn React Native
  • Build a small fun game that I would play
  • Create an Android App
  • Don’t freak out when presenting the idea
  • Get a T-Shirt

Use your B-tier ideas

If you have concerns about being exploited for your brilliance and hard work, I have a great idea, well, a mediocre idea.

Save the best ideas for yourself, and use one of the throwaway ideas.

You’re a person who can come up with a bunch of ideas, and they’re cheap to think up. It’s the implementation that takes effort.

Leave enough for the imagination

Some may disagree with this idea, and that’s okay.

Use the Throwaway Jokes

I use my time at Hackathons to learn new things, or build a slightly interesting idea. This reduces the amount of personal risk to my ego when my awesome idea gets shot down due to poor execution.

This is where I pull out a list of joke ideas. In fact, a few local winners of hackathons have won with an X for Y product clone. These ideas are great for learning new tech.

Build for your audience

It’s really easy to try to do everything in a hackathon project. When we plan a regular project, we try to build all the different features needed for the idea to work. On a time constraint, a lot of those features are unnecessary.

Presentation Oriented Design

Many hackathon projects rely on the 3-5 minutes you have to present. Spend a fair amount of time designing what the screens will look like, and the basic flow of the idea.

This allows us to skip certain ‘crucial’ elements of an application, like authentication, or server communications.

Time is the ultimate enemy

Plan on creating the smallest MVP you can imagine, then make it smaller. Depending on the time of the hackathon, a lot of items are unnecessary.

Reduce the scope just enough to get your point across

OAuth2 social media authentication for an Instagram clone? You don’t need it.

An entire backend server to process data? You might not need it.

On projects where I’m learning something completely fresh, or where many members have little experience, I want something super small and simple.

You’ll want to reduce the scope just enough to get your point across.

Keep your heart light

Tensions get high for competitors, trying to push out an idea as soon as possible. Remember to laugh, and believe that people are doing their best.

Environmental Variables for AngularJS

When it came time to set up deploys for our AngularJS application, we had a few areas in our services where API keys were hard-coded into the application. A lot of guides online have different ways of incorporating ng-constant into the build process. At Fresh Lines, I work on a lot of server-side projects, so I’m comfortable with a standard convention for configuration, like a /config folder that stores environment-based settings.

This particular setup is based on onefootball’s guide, and closely mirrors how some server-side frameworks handle their environment variables.

Intended Usage

I want to have a nice config file that lists all my environment-specific settings. In config.json:

    {
        "business": {
            "url": ""
        }
    }

and in config.production.json:

    {
        "business": {
            "url": ""
        }
    }
then have the ability to inject the configuration into places where I need it.

var app = angular.module('brandtinker.businessFactory', ['config']);

function BusinessFactory($http, $q, ENV) {
  var url =;
}

To follow along, you’ll need:

  • An AngularJS application
  • A Gruntfile (gruntjs) that works

Folder Structure

The project folders would be set up so that we have a separate config folder at the project root to hold all of the private information.

In this config folder, we’ll have the following files. These files represent the different environments we want:

    config/config.json
    config/config.development.json
    config/config.production.json
Installation Steps

Install grunt-ng-constant and lodash to your local project directory

    npm install grunt-ng-constant lodash --save-dev

(optional) If you’re using jit-grunt: add grunt-ng-constant to your jit-grunt mappings.

require('jit-grunt')(grunt, {
  useminPrepare: 'grunt-usemin',
  ngtemplates: 'grunt-angular-templates',
  cdnify: 'grunt-google-cdn',
  ngconstant: 'grunt-ng-constant'
});
We need to add the configuration JSON files to the Gruntfile. Add these lines to your Gruntfile.js, at or near the top of the file. If your Gruntfile is not at the root of your project, adjust conf1 and conf2 to match. lodash is included to merge both JSON files together.

var _ = require('lodash');

// Load the config file matching the 'profile' parameter;
// returns the default config merged with values from that file.
var buildConfig = function (profile) {
  var conf1 = './config/config.json';
  var conf2 = './config/config.' + profile + '.json';
  if (!grunt.file.exists(conf2)) {'File ' + conf2 + ' doesn\'t exist.');
  }

  return _.merge(grunt.file.readJSON(conf1), grunt.file.readJSON(conf2));
};
Add the following grunt task inside your grunt.initConfig. This sets up the configurations to read the buildConfig and set it to an ENV file.

Replace the dest: path with wherever you keep your scripts in your project. I only have production and development environments.

ngconstant: {
  // Options for all targets
  options: {
    space: '  ',
    wrap: '"use strict";\n\n {%= __ngModule %}',
    name: 'config'
  },
  // Environment targets
  development: {
    options: {
      dest: 'app/scripts/config.js'
    },
    constants: {
      ENV: buildConfig('development')
    }
  },
  production: {
    options: {
      dest: 'app/scripts/config.js'
    },
    constants: {
      ENV: buildConfig('production')
    }
  }
}
Finally, in the build and serve tasks of Grunt, we’ll need to add ngconstant to our setup. grunt build is for production, and grunt serve is for development. I would add this task as the first or second task to be run.

grunt.registerTask('build', [
  'ngconstant:production',
  // ...the rest of your existing build tasks
]);

And for grunt serve

grunt.registerTask('serve', 'Compile then start a connect web server', function (target) {
  if (target === 'dist') {
    return['build', 'connect:dist:keepalive']);
  }
  'ngconstant:development',
    // ...the rest of your existing serve tasks
  ]);
});

Then for every module where you would use this, include config as a dependency, and inject ENV

var app = angular.module('brandtinker.businessFactory', ['config']);

function BusinessFactory($http, $q, ENV) {
  var url =;
}

Credit goes to onefootball for this setup.