The code for Alexandria is pretty much ready. This chapter is all about adding small improvements and discussing things like versioning and caching. In the next chapter, we will deploy our application, so we need to make sure it's production-ready by the end of this chapter.
The first thing we are going to do is ensure that compression is enabled. Sending data uncompressed is a mistake that slows down every request, and the HTTP compression mechanism is so easy to use that we have no excuse not to.
We could set up the web server (Apache or Nginx, for example) to handle it for us. I prefer to show you how to do it directly in the Rails application though, since we will be deploying to Heroku and it's the simplest option.
First, let's see what happens if I ask the server to send the content encoded with gzip.
Start the server.
rails s
Make the following curl request. We are only going to output the size of the response with size_download. Note how we are using the Accept-Encoding header to request gzip compression.
curl -w 'Size: %{size_download} bytes\n' -o /dev/null \
  -H "Accept-Encoding: gzip" \
  -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://localhost:3000/api/books?sort=id&dir=asc"
Output
Size: 3882 bytes
So that's our baseline. Even though we specified Accept-Encoding: gzip, the server didn't encode the data.
To implement response compression, we are going to use a Rack middleware named Rack::Deflater. Let's add it to the config/application.rb file.
# config/application.rb
# Hidden Code
module Alexandria
  class Application < Rails::Application
    config.load_defaults 5.2
    config.api_only = true
    config.filter_parameters += [:cover]
    config.middleware.use Rack::Deflater
  end
end
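Rack::Deflater also accepts a couple of options if we ever need finer control. For example, we could compress only JSON responses; the snippet below is just a sketch of that optional variation (Alexandria doesn't need it), using the middleware's include option.
# config/application.rb -- optional variation, inside the Application class
# Only compress responses whose Content-Type is JSON. An :if option taking a
# lambda (env, status, headers, body) is also available for custom rules.
config.middleware.use Rack::Deflater, include: %w[application/json]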
Restart the application to activate the middleware.
rails s
Let’s make the same request again.
curl -w 'Size: %{size_download} bytes\n' -o /dev/null \
  -H "Accept-Encoding: gzip" \
  -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://localhost:3000/api/books?sort=id&dir=asc"
Output
Size: 773 bytes
Wow! More than four times smaller, with one line of code! But what did the server send back exactly? Let's check out the body of the response.
curl -H "Accept-Encoding: gzip" \
  -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://localhost:3000/api/books?sort=id&dir=asc"
Output
l�`W���n� �Wa��MS�1��R���&M�ѤNSDl����K���pR
Well, that looks compressed indeed. Note that if we want to correctly uncompress this data with curl, we need to use the --compressed option.
curl -H "Accept-Encoding: gzip" \
  -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  -i --compressed "http://localhost:3000/api/books?sort=id&dir=asc"
Output
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Link:
<http://localhost:3000/api/books?dir=asc&page=2&per=10&sort=id>; rel="next",
<http://localhost:3000/api/books?dir=asc&page=100&per=10&sort=id>; rel="last"
Content-Type: application/json; charset=utf-8
Vary: Accept-Encoding
Content-Encoding: gzip
ETag: W/"4eae1df00a7f0b27f54d37f92f83e41e"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: d80be6ff-7f09-4be0-b278-de7bc2676b44
X-Runtime: 0.023354
Transfer-Encoding: chunked
{"data":[{"id":1,"title":"Ruby Under a Microscope"...
Tadaa! That’s it for compression. Before we proceed, run the tests, just to be safe.
rspec
...
Finished in 15.14 seconds (files took 4.29 seconds to load)
197 examples, 0 failures
Looks good - now let’s talk about caching.
We've talked about it before: caching is a great way to speed up your application and handle more users with fewer servers. In this section, we will look at how to implement both client caching and server caching.
Before we do anything, let’s run some manual tests. First, start the Rails server.
rails s
Then send the following curl request.
curl -i -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://localhost:3000/api/books?sort=id&dir=asc"
Output
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Link:
<http://localhost:3000/api/books?dir=asc&page=2&per=10&sort=id>; rel="next",
<http://localhost:3000/api/books?dir=asc&page=100&per=10&sort=id>; rel="last"
Content-Type: application/json; charset=utf-8
Vary: Accept-Encoding
ETag: W/"1758f14feb7e6bbed055ac61268992a5"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 46dbbab3-991d-4cc8-a746-aade670b544b
X-Runtime: 0.191837
Transfer-Encoding: chunked
In the response, you can see the ETag and Cache-Control headers that Rails automatically included. We can use the ETag value to make a conditional HTTP request with If-None-Match.
curl -i -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  -H 'If-None-Match: W/"1758f14feb7e6bbed055ac61268992a5"' \
  "http://localhost:3000/api/books?sort=id&dir=asc"
Output
HTTP/1.1 304 Not Modified
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Link:
<http://localhost:3000/api/books?dir=asc&page=2&per=10&sort=id>; rel="next",
<http://localhost:3000/api/books?dir=asc&page=100&per=10&sort=id>; rel="last"
Vary: Accept-Encoding
ETag: W/"1758f14feb7e6bbed055ac61268992a5"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: daf998a4-d177-44d4-8170-9ecfccc94193
X-Runtime: 0.026049
And we are correctly getting a 304 Not Modified back! It seems we don't have much to do to set up client caching, since Rails is already taking care of it for us.
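These weak ETags come from the Rack::ETag middleware, which digests the full response body on every request. If we ever wanted ETags derived from our models instead, Rails controllers offer stale? and fresh_when. The snippet below is only a sketch of that idea with a simplified show action; it is not something we are adding to Alexandria.
# app/controllers/books_controller.rb -- sketch only, not part of Alexandria
def show
  book = Book.find(params[:id])
  # stale? sets ETag/Last-Modified from the record and returns false
  # (rendering 304 Not Modified) when the client's copy is still current.
  render json: book if stale?(book)
end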
Server caching is a different story. We need to ensure that the outputs of expensive operations are cached, especially for things like the book list. Most users will spend a good amount of time browsing them. The list of books available is also not going to change constantly, making it one more reason to cache the representations.
We will go through a few different parts of Alexandria and add a caching mechanism to reduce communication with the database and expensive JSON generation.
Before doing anything, let's run a quick benchmark to see how fast our API is responding. We will use the Apache Bench (ab) tool for this.
ab is already installed by default on Mac OS X. Install it on Debian/Ubuntu with:
apt-get install apache2-utils
With the following command, we are specifying that we want to send 1000 requests with 10 concurrent requests.
ab -n 1000 -c 10 -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://127.0.0.1:3000/api/books?sort=id&dir=asc"
There seems to be a bug when using localhost on Mac OS X, therefore I've switched to 127.0.0.1.
Output
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: localhost
Server Port: 3000
Document Path: /api/books
Document Length: 3282 bytes
Concurrency Level: 10
Time taken for tests: 38.023 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 3765000 bytes
HTML transferred: 3282000 bytes
Requests per second: 26.30 [#/sec] (mean)
Time per request: 380.233 [ms] (mean)
Time per request: 38.023 [ms] (mean, across all concurrent requests)
Transfer rate: 96.70 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 2
Processing: 194 379 134.7 345 1078
Waiting: 189 377 134.2 343 1075
Total: 194 379 134.7 345 1078
Percentage of the requests served within a certain time (ms)
50% 345
66% 393
75% 419
80% 445
90% 563
95% 666
98% 770
99% 878
100% 1078 (longest request)
Damn, that’s not great. We really need to improve this.
Before we do anything, let's tell Rails we want to use caching in development. As you can see in the configuration file below, caching is disabled by default. We can enable it by creating a new file in the tmp folder.
# config/environments/development.rb
# Hidden Code
# Enable/disable caching. By default caching is disabled.
if Rails.root.join('tmp', 'caching-dev.txt').exist?
  config.action_controller.perform_caching = true
  config.cache_store = :memory_store
  config.public_file_server.headers = {
    'Cache-Control' => "public, max-age=#{2.days.to_i}"
  }
else
  config.action_controller.perform_caching = false
  config.cache_store = :null_store
end
# Hidden Code
# Hidden Code
Create the needed file with the command below, and caching will be enabled in the development environment.
rails dev:cache
Let's also enable caching in testing. null_store is a fake caching store that will allow the Rails caching code to run, but won't actually cache anything.
# config/environments/test.rb
Rails.application.configure do
  # Hidden Code
  config.action_controller.perform_caching = true
  config.cache_store = :null_store
  # Hidden Code
end
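If a spec ever needs to observe real caching behavior despite the null_store (for example around the cache keys we build later in this chapter), a common trick is to swap in a memory store for just that group. A sketch, assuming RSpec; the spec file name is made up.
# spec/requests/caching_spec.rb -- hypothetical example
RSpec.describe 'Caching', type: :request do
  let(:memory_store) { ActiveSupport::Cache::MemoryStore.new }

  before do
    # Replace the null store with a real in-memory store for these examples
    allow(Rails).to receive(:cache).and_return(memory_store)
  end

  # ...expectations reading and writing Rails.cache go here...
end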
With caching configured, we can now write some code.
Let’s see what happens when we make a basic request to get a list of books.
curl -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://localhost:3000/api/books?sort=id&dir=asc"
Output
Processing by BooksController#index as */*
Parameters: {"sort"=>"id", "dir"=>"asc"}
ApiKey Load (0.8ms) SELECT "api_keys".* FROM "api_keys" WHERE
"api_keys"."active" = $1 AND "api_keys"."id" = $2 LIMIT $3 [["active",
true], ["id", 1], ["LIMIT", 1]]
(0.7ms) SELECT COUNT(*) FROM "books"
Book Load (0.6ms) SELECT "books".* FROM "books" ORDER BY id asc
LIMIT $1 OFFSET $2 [["LIMIT", 10], ["OFFSET", 0]]
Completed 200 OK in 99ms (Views: 0.3ms | ActiveRecord: 7.4ms)
There are many queries being made. With caching, we can reduce these and speed up the response.
Every request will send a query to the database to check the API key… Same thing with access tokens. We can probably improve that by caching those for a while. We are going to work in the Authentication module first.
To cache things, we are simply going to use the Rails.cache feature. With the fetch method, it will either read the value from the cache if it's available, or run the block and store the result if it isn't.
# app/controllers/concerns/authentication.rb
module Authentication
  extend ActiveSupport::Concern

  # Hidden Code

  private

  def validate_auth_scheme # Hidden Code
  def authenticate_client # Hidden Code
  def authenticate_user # Hidden Code
  def unauthorized! # Hidden Code
  def authorization_request # Hidden Code
  def authenticator # Hidden Code

  def api_key
    @api_key ||= -> do
      key = "api_keys/#{authenticator.credentials['api_key']}"
      Rails.cache.fetch(key, expires_in: 24.hours) do
        authenticator.api_key
      end
    end.call
  end

  def access_token # Hidden Code
  def current_user # Hidden Code
end
Since we need to access the credentials method to build the key, let's move that method out of the private section in the Authenticator class.
# app/services/authenticator.rb
class Authenticator
  include ActiveSupport::SecurityUtils

  def initialize # Hidden Code

  def credentials
    @credentials ||= Hash[@authorization.scan(/(\w+)[:=] ?"?([\w|:]+)"?/)]
  end

  def api_key # Hidden Code
  def access_token # Hidden Code

  private

  def secure_compare_with_hashing # Hidden Code
end
Caching the API key like this is not perfect but it’s fine for our simple needs. If we manually disable an API key, we can either wait for a day for it to be cleared from the cache, or remove it from there manually.
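For the record, clearing the entry by hand is a one-liner from the console, since we know how the key is built; the key below assumes the credential we have been sending in the Authorization header.
# rails console
Rails.cache.delete("api_keys/1:my_api_key")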
We could follow a similar approach for access tokens. It would be a bit more complicated, since we don't want to store the access tokens in the clear; we would have to ask BCrypt to hash them before creating the caching key. We are not going to implement this.
What happens now when we make the same request as before?
curl -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://localhost:3000/api/books?sort=id&dir=asc"
Started GET "/api/books?sort=id&dir=asc" for ::1 at 2016-07-09 16:59:31 +0700
Processing by BooksController#index as */*
Parameters: {"sort"=>"id", "dir"=>"asc"}
ApiKey Load (0.4ms) SELECT "api_keys".* FROM "api_keys" WHERE
"api_keys"."active" = $1 AND "api_keys"."id" = $2 LIMIT $3 [["active",
true], ["id", 1], ["LIMIT", 1]]
(0.6ms) SELECT COUNT(*) FROM "books"
Book Load (0.5ms) SELECT "books".* FROM "books" ORDER BY id asc
LIMIT $1 OFFSET $2 [["LIMIT", 10], ["OFFSET", 0]]
Completed 200 OK in 91ms (Views: 0.2ms | ActiveRecord: 10.8ms)
Started GET "/api/books?sort=id&dir=asc" for ::1 at 2016-07-09 16:59:38 +0700
Processing by BooksController#index as */*
Parameters: {"sort"=>"id", "dir"=>"asc"}
(0.4ms) SELECT COUNT(*) FROM "books"
Book Load (0.5ms) SELECT "books".* FROM "books" ORDER BY id asc
LIMIT $1 OFFSET $2 [["LIMIT", 10], ["OFFSET", 0]]
Completed 200 OK in 10ms (Views: 0.2ms | ActiveRecord: 0.9ms)
The first time, the database will be accessed to check the API key. After that, and for 24 hours, the API will just use the cache, saving some precious milliseconds.
We are still making two requests to the database, one to count the total number of books and another one to load the books themselves.
Counting records is slow, especially when you have millions of them. Pagination gems usually need the count to find the last page, which is something we use to generate the Link header. That means we need to keep counting, but do we really need to do it on every request? The book count is not going to change often, so caching it can save a huge amount of time.
If we had millions of books, we should actually count them in the background and update the cache. For our current application however, we can just lazy-load the count and use the updated_at field of the last updated book to invalidate the key.
The actual count is handled by Kaminari so, for simplicity, we are only going to cache the result of two Kaminari methods that rely on counting records: last_page? and total_pages.
# app/query_builders/paginator.rb
class Paginator
  def initialize # Hidden Code
  def paginate # Hidden Code
  def links # Hidden Code

  private

  def validate_param! # Hidden Code

  def pages
    @pages ||= {}.tap do |h|
      h[:first] = 1 if show_first_link?
      h[:prev] = @scope.current_page - 1 if show_previous_link?
      h[:next] = @scope.current_page + 1 if show_next_link?
      h[:last] = total_pages if show_last_link?
    end
  end

  def show_first_link?
    total_pages > 1 && !@scope.first_page?
  end

  def show_previous_link?
    !@scope.first_page?
  end

  def show_next_link?
    last_page?
  end

  def show_last_link?
    total_pages > 1 && last_page?
  end

  # Note: despite its name, this caches the negation of Kaminari's last_page?
  # (i.e. whether a next page exists), which is what show_next_link? and
  # show_last_link? expect. The key changes whenever the most recently
  # updated record changes, invalidating the cached value.
  def last_page?
    return true unless last_updated_at
    key = "qb/p/#{@scope.model}/#{last_updated_at.to_datetime}/last_page?"
    Rails.cache.fetch(key) do
      !@scope.last_page?
    end
  end

  def total_pages
    return 1 unless last_updated_at
    key = "qb/p/#{@scope.model}/#{last_updated_at.to_datetime}/count"
    Rails.cache.fetch(key) do
      @scope.total_pages
    end
  end

  def last_updated_at
    @last_updated_at ||= @scope.unscoped
                               .order('updated_at DESC').first.try(:updated_at)
  end
end
How are things now? Let's find out by running the command below.
curl -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://localhost:3000/api/books?sort=id&dir=asc"
Output
Started GET "/api/books" for 127.0.0.1 at 2016-06-18 18:52:25 +0700
Processing by BooksController#index as */*
Book Load (0.9ms) SELECT "books".* FROM "books" ORDER BY
updated_at DESC LIMIT $1 OFFSET $2 [["LIMIT", 1], ["OFFSET", 0]]
(0.4ms) SELECT COUNT(*) FROM "books"
Book Load (0.4ms) SELECT "books".* FROM "books" LIMIT $1 OFFSET $2
[["LIMIT", 10], ["OFFSET", 0]]
Completed 200 OK in 45ms (Views: 10.6ms | ActiveRecord: 4.3ms)
Started GET "/api/books" for 127.0.0.1 at 2016-06-18 18:52:32 +0700
Processing by BooksController#index as */*
Book Load (0.8ms) SELECT "books".* FROM "books" ORDER BY
updated_at DESC LIMIT $1 OFFSET $2 [["LIMIT", 1], ["OFFSET", 0]]
Book Load (0.3ms) SELECT "books".* FROM "books" LIMIT $1 OFFSET $2
[["LIMIT", 10], ["OFFSET", 0]]
Completed 200 OK in 7ms (Views: 3.7ms | ActiveRecord: 1.1ms)
We replaced the expensive count query with a query that gets the last updated book. Now, we need to get rid of the query loading the books!
We are going to cache representations on multiple levels. To be able to cache anything though, we need to build smart cache keys. Clients can send a lot of parameters to Alexandria to fine-tune exactly the representations they want, so we need to include all those parameters in our keys.
The first thing we need to do is update the BasePresenter class to have access to the list of fields and embeds that were requested by the client. We also want to add a new class instance variable named @cached that will allow us to configure which model/presenter we want to cache.
# app/presenters/base_presenter.rb
class BasePresenter
  include Rails.application.routes.url_helpers

  # Define a class level instance variable
  CLASS_ATTRIBUTES = {
    build_with: :build_attributes,
    related_to: :relations,
    sort_by: :sort_attributes,
    filter_by: :filter_attributes
  }
  CLASS_ATTRIBUTES.each { |k, v| instance_variable_set("@#{v}", []) }

  class << self
    attr_accessor *CLASS_ATTRIBUTES.values

    CLASS_ATTRIBUTES.each do |k, v|
      define_method k do |*args|
        instance_variable_set("@#{v}", args.map(&:to_s))
      end
    end

    def cached
      @cached = true
    end

    def cached?
      @cached
    end
  end

  attr_accessor :object, :params, :data

  def initialize(object, params, options = {})
    @object = object
    @params = params
    @options = options
    @data = HashWithIndifferentAccess.new
  end

  def as_json(*)
    @data
  end

  def build(actions)
    actions.each { |action| send(action) }
    self
  end

  # To build the cache key, we need the list of requested fields
  # sorted to make it reusable
  def validated_fields
    @fields_params ||= field_picker.fields.sort.join(',')
  end

  # Same for embeds
  def validated_embeds
    @embed_params ||= embed_picker.embeds.sort.join(',')
  end

  def fields
    @fields ||= field_picker.pick
  end

  def embeds
    @embeds ||= embed_picker.embed
  end

  private

  def field_picker
    @field_picker ||= FieldPicker.new(self)
  end

  def embed_picker
    @embed_picker ||= EmbedPicker.new(self)
  end
end
Next, let's update the Serializer. We will start relying on two new serializers, CollectionSerializer and EntitySerializer, to remove responsibilities from the Serializer class.
At the same time, we add a very important feature to the serializer: caching the generated JSON. We will rely on the collection and entity serializers to provide the key and simply append /json to it. We could easily support more formats like this by simply using /xml, for example.
# app/serializers/alexandria/serializer.rb
module Alexandria
  class Serializer
    def initialize(data:, params:, actions:, options: {})
      @data = data
      @params = params
      @actions = actions
      @options = options
      @serializer = @data.is_a?(ActiveRecord::Relation) ? collection_serializer :
                                                          entity_serializer
    end

    def to_json
      # We skip caching if the presenter is not configured for it
      return data unless @serializer.cache?

      Rails.cache.fetch("#{@serializer.key}/json", { raw: true }) do
        data
      end
    end

    private

    def data
      { data: @serializer.serialize }.to_json
    end

    def collection_serializer
      CollectionSerializer.new(@data, @params, @actions)
    end

    def entity_serializer
      presenter_klass = "#{@data.class}Presenter".constantize
      presenter = presenter_klass.new(@data, @params, @options)
      EntitySerializer.new(presenter, @actions)
    end
  end
end
Now create two new files for the collection and entity serializers.
touch app/serializers/alexandria/collection_serializer.rb \
app/serializers/alexandria/entity_serializer.rb
Here is the new collection serializer.
# app/serializers/alexandria/collection_serializer.rb
module Alexandria
  class CollectionSerializer
    def initialize(collection, params, actions)
      @collection = collection
      @params = params
      @actions = actions
    end

    def serialize
      return @collection unless @collection.any?
      build_data
    end

    def key
      # We hash the key using SHA1 to reduce its size
      @key ||= Digest::SHA1.hexdigest(build_key)
    end

    def cache?
      presenter_class.cached?
    end

    private

    def build_data
      @collection.map do |entity|
        presenter = presenter_class.new(entity, @params)
        EntitySerializer.new(presenter, @actions).serialize
      end
    end

    def presenter_class
      @presenter_class ||= "#{@collection.model}Presenter".constantize
    end

    # Building the key is complex. We need to take into account all
    # the parameters the client can send.
    def build_key
      last = @collection.unscoped.order('updated_at DESC').first
      presenter = presenter_class.new(last, @params)
      updated_at = last.try(:updated_at).try(:to_datetime)

      cache_key = "collection/#{last.class}/#{updated_at}"

      [:sort, :dir, :page, :per, :q].each do |param|
        cache_key << "/#{param}:#{@params[param]}" if @params[param]
      end

      if presenter.validated_fields.present?
        cache_key << "/fields:#{presenter.validated_fields}"
      end

      if presenter.validated_embeds.present?
        cache_key << "/embeds:#{presenter.validated_embeds}"
      end

      cache_key
    end
  end
end
The EntitySerializer class is pretty similar to the CollectionSerializer class. The main difference is that we can skip the query builder parameters when creating the key.
# app/serializers/alexandria/entity_serializer.rb
module Alexandria
  class EntitySerializer
    def initialize(presenter, actions)
      @presenter = presenter
      @entity = @presenter.object
      @actions = actions
    end

    def serialize
      @presenter.build(@actions)
    end

    def key
      @key ||= Digest::SHA1.hexdigest(build_key)
    end

    def cache?
      @presenter.class.cached?
    end

    private

    def build_key
      updated_at = @entity.updated_at.to_datetime
      cache_key = "model/#{@entity.class}/#{@entity.id}/#{updated_at}"

      if @presenter.validated_fields.present?
        cache_key << "/fields:#{@presenter.validated_fields}"
      end

      if @presenter.validated_embeds.present?
        cache_key << "/embeds:#{@presenter.validated_embeds}"
      end

      cache_key
    end
  end
end
The last step is enabling caching in the presenters. We are only going to enable it in the BookPresenter, so I trust you can add it to the other presenters yourself.
# app/presenters/book_presenter.rb
class BookPresenter < BasePresenter
  cached

  build_with :id, :title, :subtitle, :isbn_10, :isbn_13, :description,
             :released_on, :publisher_id, :author_id, :created_at, :updated_at,
             :cover, :price_cents, :price_currency
  related_to :publisher, :author
  sort_by :id, :title, :released_on, :created_at, :updated_at, :price_cents,
          :price_currency
  filter_by :id, :title, :isbn_10, :isbn_13, :released_on, :publisher_id,
            :author_id, :price_cents, :price_currency

  def cover
    path = @object.cover.url.to_s
    path[0] = '' if path[0] == '/'
    "#{root_url}#{path}"
  end
end
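For the other presenters, the whole change is the single cached call at the top of the class. As a sketch, here is what it would look like for authors (the existing attribute declarations are elided):
# app/presenters/author_presenter.rb -- sketch, only the `cached` line is new
class AuthorPresenter < BasePresenter
  cached

  # build_with, related_to, sort_by and filter_by stay exactly as they were
end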
Before we try the Apache Bench tool again, there is one more optimization we can do. Let's make the JSON generation faster with an optimized gem like Oj (Optimized JSON).
Add the gem to your Gemfile…
# Gemfile
source 'https://rubygems.org'
git_source(:github) { |repo| "https://github.com/#{repo}.git" }
ruby '2.5.0'
gem 'rails', '5.2.0'
gem 'pg'
gem 'puma', '~> 3.11'
gem 'bootsnap', '>= 1.1.0', require: false
gem 'carrierwave'
gem 'carrierwave-base64'
gem 'pg_search'
gem 'kaminari'
gem 'bcrypt', '~> 3.1.7'
gem 'pundit'
gem 'money-rails', '1.11.0'
gem 'stripe'
gem 'oj'
# Hidden Code
and install it with bundle.
bundle install
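Depending on the Oj version that gets installed, simply adding the gem may not be enough for Rails to route its JSON encoding through it; recent versions ask you to opt in explicitly. If that's the case for you, an initializer like the sketch below (the file name is arbitrary) takes care of it.
# config/initializers/oj.rb -- only needed if your Oj version requires opting in
# Hooks Oj into the JSON encoding/decoding paths used by Rails and the json gem.
Oj.optimize_rails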
It's time to use ab again to see how our API is performing now.
Restart the server…
rails s
and run the following Apache Bench test.
ab -n 1000 -c 10 -H "Authorization: Alexandria-Token api_key=1:my_api_key" \
  "http://127.0.0.1:3000/api/books?sort=id&dir=asc"
Output
Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 3000
Document Path: /api/books?sort=id&dir=asc
Document Length: 3882 bytes
Concurrency Level: 10
Time taken for tests: 5.939 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 4302000 bytes
HTML transferred: 3882000 bytes
Requests per second: 168.37 [#/sec] (mean)
Time per request: 59.392 [ms] (mean)
Time per request: 5.939 [ms] (mean, across all concurrent requests)
Transfer rate: 707.36 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 12 57 10.2 56 174
Waiting: 12 57 10.2 56 173
Total: 13 58 10.2 57 174
Percentage of the requests served within a certain time (ms)
50% 57
66% 60
75% 63
80% 65
90% 70
95% 74
98% 81
99% 89
100% 174 (longest request)
Wow, that’s so much better, right? Caching really saved us from slow requests.
Let’s run the tests to see if we broke something.
rspec
Success (GREEN)
...
Finished in 15.11 seconds (files took 4.08 seconds to load)
323 examples, 0 failures
We could improve this setup even further by adding an index to all the updated_at columns in order to make the SQL queries validating the cache faster.
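As a sketch, such a migration could look like the one below; the migration name is arbitrary and the table list simply mirrors the resources we cache.
# db/migrate/20180701000000_add_updated_at_indexes.rb -- sketch
class AddUpdatedAtIndexes < ActiveRecord::Migration[5.2]
  def change
    # Speeds up the ORDER BY updated_at DESC LIMIT 1 queries used to build
    # cache keys in Paginator and CollectionSerializer.
    add_index :books, :updated_at
    add_index :authors, :updated_at
    add_index :publishers, :updated_at
  end
end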
The Alexandria API should work with JavaScript applications running in a browser. However, this can be a problem: browsers block JavaScript requests to other origins unless the server explicitly allows them through Cross-Origin Resource Sharing (CORS).
Let’s give it a try.
mkdir frontend && touch frontend/index.html
Put the following code in this new file. This will help us test the CORS setup in Alexandria.
<!-- frontend/index.html -->
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Alexandria</title>
  </head>
  <body>
    <div id="books"></div>

    <script src="http://code.jquery.com/jquery-latest.min.js"></script>
    <script type="text/javascript">
      $.ajax({
        url: "http://localhost:3000/api/books",
        headers: {
          "Authorization": "Alexandria-Token api_key=1:my_api_key"
        },
        success: function(response) {
          console.log("Response", response);
          $("#books").text(JSON.stringify(response));
        }
      })
    </script>
  </body>
</html>
We are not going to run it on a web server, which means we will need to allow all origins with *. Once we deploy our API, we can change the allowed origins.
Start the server.
rails s
Open the frontend/index.html
file in your browser. Open the console and you should see something like what is shown in Figure 1.
Obviously, our API is not allowing this small script to interact with it. Let’s fix it by setting up CORS in Alexandria.
Add the rack-cors gem to your Gemfile.
# Gemfile
source 'https://rubygems.org'
git_source(:github) { |repo| "https://github.com/#{repo}.git" }
ruby '2.5.0'
gem 'rails', '5.2.0'
gem 'pg'
gem 'puma', '~> 3.11'
gem 'bootsnap', '>= 1.1.0', require: false
gem 'carrierwave'
gem 'carrierwave-base64'
gem 'pg_search'
gem 'kaminari'
gem 'bcrypt', '~> 3.1.7'
gem 'pundit'
gem 'money-rails', '1.11.0'
gem 'stripe'
gem 'oj'
gem 'rack-cors', :require => 'rack/cors'
# Hidden Code
Get it installed.
bundle install
Update the cors.rb initializer that should already be in your Rails application.
# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    resource '*',
             headers: :any,
             methods: [:get, :post, :patch, :delete, :options, :head],
             expose: ['Link']
  end
end
That should be enough to make our JavaScript script work.
Restart the server.
rails s
Reload the HTML page in your browser. We can now see that the requests are successful (Figure 2 & Figure 3).
Note that the origin is ‘null’ because we opened the index.html file from the filesystem instead of using a web server.
We can improve this by using an environment variable to store the allowed origins.
# config/env.rb
ENV['STRIPE_API_KEY'] = 'YOUR_API_KEY'
ENV['ALLOWED_CORS_ORIGINS'] = '*'
Now we just need to change the CORS initializer to use this environment variable.
# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins *ENV.fetch('ALLOWED_CORS_ORIGINS').split(',')
    resource '*',
             headers: :any,
             methods: [:get, :post, :patch, :delete, :options, :head],
             expose: ['Link']
  end
end
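Once we deploy, the same variable can hold a comma-separated whitelist instead of the wildcard; the domains below are placeholders, and in production the value would typically be set on the host (a Heroku config var, for example) rather than in config/env.rb.
# Hypothetical production-like value
ENV['ALLOWED_CORS_ORIGINS'] = 'https://alexandria-frontend.example.com,https://admin.example.com'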
Try reloading the HTML page again and you should not have any issue.
We now support CORS requests!
We have many options to version our API. If you’ve read Chapter 7 in the first module, you know that the “right” way of doing it is to follow REST and the web standards and to version the media type instead of versioning resources.
Sadly, Alexandria is not currently RESTful and we haven’t created a custom media type (yet). While this will happen in the third module, our best option right now is to go with the most popular approach: versioning in the URL.
It is important to understand how it works since this is widely used, but I wouldn’t recommend using it for a new project. We are only going to check how to do this without implementing it. We will work on actually versioning Alexandria in the third module.
If we don’t want to change anything in the application while releasing it with the version in the URI, like /api/v1/, we can add a new scope in the routes file.
# config/routes.rb
Rails.application.routes.draw do
  scope :api do
    scope :v1 do
      resources :books, except: :put do
        get :download, to: 'downloads#show'
      end
      resources :authors, except: :put
      resources :publishers, except: :put
      resources :users, except: :put
      resources :user_confirmations, only: :show, param: :confirmation_token
      resources :password_resets, only: [:show, :create, :update],
                                  param: :reset_token
      resources :access_tokens, only: :create do
        delete '/', action: :destroy, on: :collection
      end
      resources :purchases, only: [:index, :show, :create]

      get '/search/:text', to: 'search#index'
    end
  end

  root to: 'books#index'
end
rake routes
Output
Prefix Verb URI Pattern Controller#Action
book_download GET /api/v1/books/:book_id/download(.:format) downloads#show
books GET /api/v1/books(.:format) books#index
POST /api/v1/books(.:format) books#create
book GET /api/v1/books/:id(.:format) books#show
PATCH /api/v1/books/:id(.:format) books#update
PUT /api/v1/books/:id(.:format) books#update
DELETE /api/v1/books/:id(.:format) books#destroy
[Hidden Routes]
With this, our URIs include a version number and still get routed to our controllers. We didn’t have to change anything else, and we could wait until creating a second version before reorganizing the application.
If we fast-forward a few months, we now have some breaking changes. It’s time to set up the v2.
# config/routes.rb
Rails.application.routes.draw do
  scope :api do
    namespace :v1 do
      # Hidden Routes
    end

    namespace :v2 do
      # Hidden Routes
    end
  end

  root to: 'books#index'
end
Namespaces, unlike scopes, require controllers to be correctly contained in modules. To reflect this change, we would need to move our controllers into folders named after our modules.
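Concretely, a controller that lives under app/controllers/v1/ has to be wrapped in a V1 module so that the namespaced routes (and Rails autoloading) can find it. A minimal sketch, keeping in mind that we are not actually implementing this:
# app/controllers/v1/books_controller.rb -- sketch only
module V1
  class BooksController < ApplicationController
    # same actions as before; only the module wrapper and file location change
  end
end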
We currently have this structure in the app/ folder.
connectors/
controllers/
  my_models_controller.rb
errors/
mailers/
models/
  my_model.rb
policies/
  my_model_policy.rb
presenters/
  my_model_presenter.rb
query_builders/
representation_builders/
serializers/
uploaders/
views/
Depending on what we need to break in the v2, we can update only specific folders. The mandatory change, to follow our new routes, is to namespace the controllers folder.
connectors/
controllers/
  v1/
    my_models_controller.rb
  v2/
    my_models_controller.rb
errors/
mailers/
models/
  my_model.rb
policies/
  my_model_policy.rb
presenters/
  my_model_presenter.rb
query_builders/
representation_builders/
serializers/
uploaders/
views/
Then we can fine-tune what else needs to be versioned. Is it the serialization? The presentable model? We could, for example, decide to version our presenters to remove some fields and add new ones.
connectors/
controllers/
  v1/
    my_models_controller.rb
  v2/
    my_models_controller.rb
errors/
mailers/
models/
  my_model.rb
policies/
  my_model_policy.rb
presenters/
  v1/
    my_model_presenter.rb
  v2/
    my_model_presenter.rb
query_builders/
representation_builders/
serializers/
uploaders/
views/
If we had to remove the cover field from the list of attributes that can be used to build books, we would end up with the following two presenters.
# app/presenters/v1/book_presenter.rb
module V1
  class BookPresenter < BasePresenter
    cached

    build_with :id, :title, :subtitle, :isbn_10, :isbn_13, :description,
               :released_on, :publisher_id, :author_id, :created_at,
               :updated_at, :cover, :price_cents, :price_currency
    related_to :publisher, :author
    sort_by :id, :title, :released_on, :created_at, :updated_at,
            :price_cents, :price_currency
    filter_by :id, :title, :isbn_10, :isbn_13, :released_on,
              :publisher_id, :author_id, :price_cents,
              :price_currency

    def cover
      path = @object.cover.url.to_s
      path[0] = '' if path[0] == '/'
      "#{root_url}#{path}"
    end
  end
end
# app/presenters/v2/book_presenter.rb
module V2
  class BookPresenter < BasePresenter
    cached

    build_with :id, :title, :subtitle, :isbn_10, :isbn_13, :description,
               :released_on, :publisher_id, :author_id, :created_at,
               :updated_at, :price_cents, :price_currency
    related_to :publisher, :author
    sort_by :id, :title, :released_on, :created_at, :updated_at,
            :price_cents, :price_currency
    filter_by :id, :title, :isbn_10, :isbn_13, :released_on,
              :publisher_id, :author_id, :price_cents, :price_currency
  end
end
Other entities in Alexandria can be versioned as well. The problem with this approach is that we need to version entire controllers and duplicate code. In the third module we will version the media type, which should limit the scope of required changes.
Reset the changes we made to the config/routes.rb file.
git checkout config/routes.rb
Run all the tests to ensure that everything is working.
rspec
Success (GREEN)
...
Finished in 13.56 seconds (files took 3.2 seconds to load)
323 examples, 0 failures
Let’s push the changes.
git status
Output
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: Gemfile
modified: Gemfile.lock
modified: app/controllers/access_tokens_controller.rb
modified: app/controllers/application_controller.rb
modified: app/controllers/concerns/authentication.rb
modified: app/models/api_key.rb
modified: app/presenters/base_presenter.rb
modified: app/presenters/book_presenter.rb
modified: app/query_builders/paginator.rb
modified: app/serializers/alexandria/serializer.rb
modified: config/application.rb
modified: config/environments/test.rb
modified: config/initializers/cors.rb
Untracked files:
(use "git add <file>..." to include in what will be committed)
app/controllers/cors_controller.rb
app/serializers/alexandria/collection_serializer.rb
app/serializers/alexandria/entity_serializer.rb
frontend/
no changes added to commit (use "git add" and/or "git commit -a")
Stage them.
git add .
Commit the changes.
git commit -m "Implement compression, caching and CORS"
Output
[master bb1a283] Implement compression, caching and CORS
19 files changed, 317 insertions(+), 70 deletions(-)
create mode 100644 app/controllers/cors_controller.rb
create mode 100644 app/serializers/alexandria/collection_serializer.rb
create mode 100644 app/serializers/alexandria/entity_serializer.rb
rewrite config/initializers/cors.rb (99%)
create mode 100644 frontend/index.html
Push to GitHub.
git push origin master
In this chapter, we went over a few ways to improve Alexandria. Our API should now be faster thanks to caching and compression, and JavaScript applications can now communicate with it thanks to our CORS implementation.
In the next chapter, we will talk about documentation and the various tools we can use to describe our web API.