The ClojureWerkz Blog

News and updates about ClojureWerkz projects

Cassaforte 1.2.0 Is Released

TL;DR

Cassaforte is a new Clojure client for Apache Cassandra 1.2+. It is built around CQL 3 and focuses on ease of use. You will likely find that using Cassandra from Clojure has never been so easy.
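
For a taste of the API, here is a minimal sketch (the keyspace and table are hypothetical, and the functions shown are assumed to come from Cassaforte’s clojurewerkz.cassaforte.client and clojurewerkz.cassaforte.cql namespaces):

(require '[clojurewerkz.cassaforte.client :as client]
         '[clojurewerkz.cassaforte.cql    :refer :all])

;; connect to a local node and pick a (hypothetical) keyspace
(client/connect! ["127.0.0.1"])
(use-keyspace :demo)

;; insert and read back a row from a hypothetical users table
(insert :users {:name "Alex" :city "Munich"})
(select :users)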

1.2.0 is a minor release that introduces a few minor features, fixes a couple of bugs, and makes Cassaforte compatible with Cassandra 2.0.

Changes between Cassaforte 1.1.x and 1.2.0

Cassandra Java Driver Update

The Cassandra Java driver dependency has been updated to 1.0.3, which supports Cassandra 2.0.

Fix problem with batched prepared statements

insert-batch didn’t play well with prepared statements; that problem is now fixed, and insert-batch can be used with prepared statements normally.
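
For example, a sketch assuming Cassaforte’s client/prepared macro and a hypothetical users table:

;; executes the whole batch as a prepared statement
(client/prepared
  (insert-batch :users [{:name "Alex"   :city "Munich"}
                        {:name "Robert" :city "Berlin"}]))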

Hayt query generator update

Hayt has been updated to version 1.1.3, which contains fixes for the token function and some internal improvements that do not affect any public APIs.

Added New Consistency Level DSL

The consistency level can now (also) be passed as a keyword, without resolving it to a ConsistencyLevel instance:

(client/with-consistency-level :quorum
  (insert :users r))

Please note that the old DSL still works and is supported.
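
For comparison, here is what the older form looks like, passing a ConsistencyLevel instance from the underlying Java driver (the exact shape of the pre-1.2 DSL shown here is an assumption):

(import 'com.datastax.driver.core.ConsistencyLevel)

;; resolving the consistency level to a driver enum instance by hand
(client/with-consistency-level ConsistencyLevel/QUORUM
  (insert :users r))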

Password authentication supported

Password authentication is now supported via the :credentials option to client/build-cluster. Pass it a map with :username and :password keys:

(client/build-cluster {:contact-points ["127.0.0.1"]
                       :credentials {:username "ceilingcat" :password "ohai"}
                       ;; ...
                       })

User Management Query DSL

A query DSL for managing users has been added: create-user, alter-user, drop-user, grant, revoke, list-users and list-permissions are available for both multi and regular sessions.

News and Updates

New releases and updates are announced on Twitter. Cassaforte also has a mailing list; feel free to ask questions and report issues there.

Cassaforte is a ClojureWerkz Project

Cassaforte is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Monger, a Clojure MongoDB client for a more civilized age
  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • EEP, a Clojure library for stream (event) processing
  • Neocons, a Clojure client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like Cassaforte, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

Michael on behalf of the ClojureWerkz Team.

Langohr 1.5.0 Is Released

TL;DR

Langohr is a Clojure RabbitMQ client that embraces the AMQP 0.9.1 model.

1.5.0 is a minor feature release.

Changes between Langohr 1.4.0 and 1.5.0

Automatic Recovery Improvements

Automatic recovery of channels that are created without an explicit number now works correctly.

Contributed by Joe Freeman.
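
For those who have not used it, automatic recovery is opt-in at connection time. A minimal sketch, assuming Langohr’s :automatically-recover connection option:

(require '[langohr.core    :as rmq]
         '[langohr.channel :as lch])

(let [conn (rmq/connect {:automatically-recover true}) ;; option name is an assumption
      ;; a channel opened without an explicit number; it is now
      ;; recovered correctly after a connection failure
      ch   (lch/open conn)]
  ch)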

clj-http Upgrade

clj-http dependency has been updated to 0.7.6.

Clojure 1.3 is No Longer Supported

Langohr requires Clojure 1.4+ as of this version.

More Convenient Publisher Confirms Support

langohr.confirm/wait-for-confirms is a new function that waits until all outstanding confirms for messages published on the given channel arrive. It optionally takes a timeout:

(langohr.confirm/wait-for-confirms ch)
;; wait up to 200 milliseconds
(langohr.confirm/wait-for-confirms ch 200)
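
Putting it together with publisher confirms enabled on the channel (the queue name here is hypothetical):

(require '[langohr.core    :as rmq]
         '[langohr.channel :as lch]
         '[langohr.basic   :as lb]
         '[langohr.confirm :as lcf])

(let [conn (rmq/connect)
      ch   (lch/open conn)]
  ;; put the channel into publisher confirms mode
  (lcf/select ch)
  ;; publish via the default exchange to a hypothetical queue
  (lb/publish ch "" "my.queue" "a message")
  ;; block until all outstanding messages are confirmed, waiting up to 200 ms
  (lcf/wait-for-confirms ch 200))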

Change Log

Langohr change log is available on GitHub.

Langohr is a ClojureWerkz Project

Langohr is part of the group of libraries known as ClojureWerkz, together with

  • Elastisch, a minimalistic well documented Clojure client for ElasticSearch
  • Cassaforte, a Clojure Cassandra client built around CQL 3.0
  • Monger, a Clojure MongoDB client for a more civilized age
  • Neocons, a client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like Langohr, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

Michael on behalf of the ClojureWerkz Team

Route One 1.0.0-rc2 Is Released

Route One is a Clojure DSL for URL/URI/path generation from a route map, compatible with Compojure’s Clout.

1.0.0-rc2 is a development milestone release that further improves Compojure integration.

Changes between Route One 1.0.0-rc1 and 1.0.0-rc2

Tight Compojure integration

It is now possible to define named Compojure routes with Route One:

(ns my-app
  (:require [compojure.core :as compojure])
  (:use clojurewerkz.route-one.compojure))

(compojure/defroutes main-routes
  (GET about request (handlers.root/root-page request)) ;; will use /about as a template
  (GET documents request (handlers.root/documents-page request))) ;; will use /documents as a template

This will generate main-routes in the exact same manner Compojure does, but will also add helper functions for building URLs (about-path, about-url, documents-path, documents-url and so on).
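
A sketch of what the generated helpers give you (with-base-url comes from clojurewerkz.route-one.core):

(about-path)     ;; => "/about"
(documents-path) ;; => "/documents"

(with-base-url "https://myservice.com"
  (about-url))   ;; => "https://myservice.com/about"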

To use this feature, you’ll need to add Compojure as a dependency to your project:

[compojure "1.1.5"]

Change log

Route One change log is available on GitHub.

Route One is a ClojureWerkz Project

Route One is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Elastisch, a small feature complete Clojure client for ElasticSearch
  • Cassaforte, a Clojure Cassandra client
  • Monger, a Clojure MongoDB client for a more civilized age
  • Titanium, a Clojure graph library
  • Neocons, a client for the Neo4J REST API
  • Welle, a Riak client with batteries included
  • Quartzite, a powerful scheduling library

and several others. If you like Route One, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

Michael on behalf of the ClojureWerkz Team

Introducing Meltdown

Can You Please Pass Me a Message?

Last week we introduced a new project we’ve been working on for a few months, EEP.

To make event processing concurrent and parallel in EEP, there needs to be a way to transfer events (messages) from the threads that produce them to the threads that consume them. In addition, it is desirable for consumers to be able to filter the events they are interested in.

There are multiple message passing libraries available on the JVM. Some of them are stable and very mature but have a very small contributor base; others are used very actively but are less convenient to consume from Clojure; some do not offer the features we wanted. So we decided to wait and see, and not make the choice yet.

Enter Reactor

This move turned out to be the right one: a couple of months after the first EEP prototype was put up on GitHub to get some feedback from our friends (hi, Darach!), the folks at Pivotal introduced Reactor, a “foundational framework for asynchronous programming on the JVM”.

Reactor’s core is an event (message) passing library with several features that we found very handy for stream processing:

  • Consumers may consume events selectively (events have routing keys)
  • Message passing implementation is pluggable
  • It’s very easy to run multiple reactors in the same JVM
  • It can be backed by LMAX Disruptor, which offers very high throughput and really low stable latency thanks to smart false sharing elimination techniques

After seeing that it took only 2 hours to write a first version of Meltdown with some tests, we were convinced that Reactor was a great choice for our needs.

Meltdown Goes to School

Since Reactor was a really young project at the time we started using it (we may have been the first people outside of Pivotal to build something on top of Reactor at that point), it took a few iterations and serious breaking API changes in both libraries before we were confident enough to port EEP to it.

In the end it took Alex a few hours to make the switch, which again demonstrates how well Reactor and Meltdown fit EEP.

The Future of Meltdown

Meltdown still does not cover all of Reactor’s functionality, and Reactor itself is under active development, so Meltdown will stay alpha for some time. Now that we’ve announced it here, we will do our best to write some initial documentation guides for it.

In the meantime, feel free to give Meltdown a try. Check it out in the REPL and try modeling a problem that needs message passing with it. It will likely already take you a long way, despite the currently sparse documentation.
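
To get you started in the REPL, here is a minimal sketch based on Meltdown’s early API (names and signatures may have changed since, so treat them as assumptions):

(require '[clojurewerkz.meltdown.reactor   :as mr]
         '[clojurewerkz.meltdown.selectors :refer [$]])

(let [r (mr/create)]
  ;; subscribe to events published with the "pages.visit" key;
  ;; handlers receive an event map (the :data key is an assumption)
  (mr/on r ($ "pages.visit") (fn [event] (println "visited:" (:data event))))
  ;; publish an event to matching consumers
  (mr/notify r "pages.visit" {:url "/about"}))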

You can watch our progress on GitHub and follow the news on Twitter @clojurewerkz.

Meltdown is a ClojureWerkz Project

Meltdown is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Cassaforte, a Clojure Cassandra client built around CQL
  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • EEP, a Clojure event processing library
  • Monger, a Clojure MongoDB client for a more civilized age
  • Neocons, a client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like Meltdown, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

@michaelklishin on behalf of the ClojureWerkz Team

Elastisch 1.3.0-beta2 Is Released

TL;DR

Elastisch is a battle tested, small but feature rich Clojure client for ElasticSearch. It supports virtually every ElasticSearch feature and has solid documentation.

1.3.0-beta2 is a development milestone release that is compatible with 1.2.0 except for dropped Clojure 1.3 support.

Changes between Elastisch 1.3.0-beta1 and 1.3.0-beta2

Bulk Index and Delete Operations Support More Options

Bulk index and delete operations now support the _parent and _routing keys.

Contributed by Baptiste Fontaine.

Clojure 1.3 Support Dropped

Elastisch now requires Clojure 1.4.

Changes between Elastisch 1.2.0 and 1.3.0-beta1

Cheshire Update

Cheshire dependency has been upgraded to version 5.2.0.

clj-http Update

clj-http dependency has been upgraded to version 0.7.6.

Change log

Elastisch change log is available on GitHub.

Thank You, Contributors

Kudos to Baptiste Fontaine for contributing to this release.

Elastisch is a ClojureWerkz Project

Elastisch is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Monger, a Clojure MongoDB client for a more civilized age
  • Cassaforte, a Clojure Cassandra client
  • Titanium, a Clojure graph library
  • Neocons, a client for the Neo4J REST API
  • Welle, a Riak client with batteries included
  • Quartzite, a powerful scheduling library

and several others. If you like Elastisch, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

Michael on behalf of the ClojureWerkz Team

Introducing EEP, a Stream Processing Library

Drinking From The Data Firehose

If you work with data a lot, and you have a lot of it, it becomes nearly impossible to process the entire corpus in one run. Sometimes you simply can’t do that at all, since the data comes in the form of events. Moreover, as your codebase grows, you’ll be forced to create a library that gets most of the routing out of the way, so that you can pay attention to the details rather than having to grasp the entire flow.

There’s been a lot of progress on this subject lately in the Clojure community. The Prismatic folks released their processing library, Graph. Kyle Kingsbury created Riemann, which uses a similar approach internally. Zach Tellman, creator of Aleph, released Lamina, a library for working with streams, a couple of years ago. Eventsourced, Pipes from TinkerPop, and Storm by Nathan Marz can also be counted as good examples.

The basic idea remains the same. You have a stream of data coming in in the form of events. You build a topology of functions that broadcast, transform, filter, aggregate or save the state of said events. At any given point in time you can inspect the intermediate result of the calculation when the stream of events is being fetched from some data source, or you can get results interactively (real-time, yo) and react to the system’s behavior.

After trying much the same approach on quite a few problems, we found that it works quite well. Of course, depending on the required throughput, our approach may not be exactly what you want to use in your production system, but the interface will most likely be similar to the alternatives, even though implementation details will vary.

Today we are releasing our own library into this melting pot of JVM-based stream processing projects.

Enter EEP

EEP is our own young entrant to this space.

When we first started investigating the state of the art in event processing, the intuitive choices for inspiration were Erlang (gen_event) and Node.js (don’t judge!). They certainly have very different approaches to concurrency, but there are similarities.

In gen_event, the two functions used more often than others are gen_event:add_handler and gen_event:notify. The former subscribes a handler to occurring events; the latter sends events to the emitter, which dispatches them to the handlers. The Node.js approach is very similar: multiple handlers per event type, routed on emission.

Next we will briefly cover EEP concepts and demonstrate what it feels like to use it with some code examples.

Core concepts

Core concepts in EEP are:

  • Emitter is responsible for handler registration and event routing. It holds everything together.

  • Event is a tuple dispatched by the outside world into the emitter. An event is an arbitrary tuple of user-defined structure; no validation of its structure is performed internally.

  • Event Type is a unique event type identifier, used for routing. It can be a number, a symbol, a keyword, a string or anything else. All events coming into the Emitter have a type associated with them.

  • Handler is a function with optional state attached to it. The function is a callback, executed whenever its Event Type matches an incoming event. A single handler can be used for multiple Event Types, but an Event Type can only have one Handler at a time.
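
Here is a minimal sketch that ties these concepts together, using only functions that appear in the larger example below (the emitter namespace name is our assumption):

(require '[clojurewerkz.eep.emitter :refer :all]) ;; namespace name assumed

(let [e (new-emitter)]
  ;; a handler that counts every event of type :visit
  (defaggregator e :visit (fn [acc _] (inc acc)) 0)
  ;; the outside world dispatches events into the emitter
  (notify e :visit {:url "/about"})
  (notify e :visit {:url "/faq"})
  ;; inspect the handler's state at any point in time
  (state (get-handler e :visit))) ;; => 2 (once the events have been processed)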

Building Data Flows

Now, with these building blocks, we can go ahead and start building processing graphs. For that, we need to define several types of handlers that are aware of what the data looks like.

  • filter receives events of a certain type and forwards those for which filter-fn returns true to one or more other handlers

  • splitter receives events of a certain type and dispatches them to the type returned by a predicate function. For example, you can split a stream of integers into even and odd ones and process them differently further down the pipeline.

  • transformer receives typed tuples, applies a transformation function to each of them and forwards the results to one or more other handlers. It’s similar to applying map to the elements of a list, except the function is applied to a stream of data.

  • aggregator is initialized with an initial value, then receives events of a certain type and aggregates state by applying an aggregate function to the current state and each incoming event. It’s similar to Clojure’s reduce, except it’s applied to a stream of data.

  • multicast receives events of a certain type and broadcasts them to several handlers with different types. For example, whenever an alert is received, you may want to send notifications via email, IRC and Jabber, and append the event to a log file.

  • observer receives events of a certain type and runs a function (potentially with side effects) on each of them.

  • buffer receives events of a certain type and stores them in a circular buffer with a given capacity. As soon as capacity is reached, it distributes the buffered events to several other handlers.

  • rollup acts in a manner similar to buffer, except it is time-bound rather than capacity-bound: whenever the time period elapses, it dispatches all accumulated events to several other handlers.

Let’s take a closer look at an example of stream processing. Suppose you have a discrete stream of events coming from web servers that holds information about page loads on your website. Interesting information to monitor would include:

  • host (where the page load occurred)
  • response code (HTTP status of the response)
  • user agent information
  • response duration
  • url of the response

From that single payload type you can already derive an incredible amount of information, for example:

  • slowest/fastest response time
  • last 20 response times
  • total number of responses (per given amount of time)
  • response number breakdown by status code
  • hottest URLs on the website (page loads breakdown by url)
  • user agent breakdown
  • a count of only the 404s

If you do this in the most straightforward way, you will end up with lots of ad-hoc routing code: metrics related to response time need only the response time, you’ll need buffers for rollups that aggregate data over a certain period and stream it to the next computation unit, and so on.

We went through many use cases related to discrete data aggregation and worked out several entities that help create calculation topologies. Besides that, you’ll need a queue with some built-in routing capabilities that can manage buffered aggregation, filter, stream data down to several handlers and do many other, not-so-obvious things.

Now we’ll declare the metrics we want to aggregate on. In order to make our processing graph more reusable, we’ll separate metric retrieval from metric calculation. This way we’ll be able to reuse the same aggregate function for several types of metrics and achieve the desired result by providing appropriate routing.

(def emitter (new-emitter))

;; Redistribute an event to all transformers and aggregators
(defmulticast emitter :page-load [:total-request-count
                                  :load-time-metrics
                                  :status-code-metrics
                                  :user-agent-metrics
                                  :url-metrics])

Now, let’s define transformers that receive a complete event, take a single field out of it and redistribute it further down:

;; Take only :load-time for metrics related to load time
(deftransformer emitter :load-time-metrics :load-time [:load-time-slowest :load-time-fastest :load-times-last-20])

;; Take only :status for metrics related to status code
(deftransformer emitter :status-code-metrics :status :count-by-status-code)

;; Take only :user-agent for metrics related to user agent
(deftransformer emitter :user-agent-metrics :user-agent :count-by-user-agent)

;; Take only :url for metrics related to url
(deftransformer emitter :url-metrics :url :count-by-url)

Now, we can define our aggregate functions:

;; Define a counter aggregator for all requests
(defaggregator emitter :total-request-count (fn [acc _] (inc acc)) 0)

;; Preserve only slowest load time
(defaggregator emitter :load-time-slowest (fn [previous current]
                                            (if (and previous (< previous current))
                                              previous
                                              current)) nil)
;; Preserve only fastest load time
(defaggregator emitter :load-time-fastest (fn [previous current]
                                            (if (and previous (> previous current))
                                              previous
                                              current)) nil)

(let [count-aggregate (fn [acc metric]
                        (assoc acc metric (inc (get acc metric 0))))]
  ;; Aggregate counts by status code
  (defaggregator emitter :count-by-status-code count-aggregate {})

  ;; Aggregate counts by user agent code
  (defaggregator emitter :count-by-user-agent count-aggregate {})

  ;; Aggregate counts by user url
  (defaggregator emitter :count-by-url count-aggregate {}))

;; Define a buffer for last 20 events
(defbuffer emitter :load-times-last-20 20)

Now that our graph is ready, we can visualize it:

Graph Visualisation

In order to pump some data into the processing graph, let’s generate some random events:

(defn rand-between
  "Generates a random number between two points"
  [start end] (+ start (rand-int (- end start))))

(def hosts ["host01" "host02" "host03" "host04" "host05"])
(def urls ["/url01" "/url02" "/url03" "/url04" "/url05"])
(def user-agents ["Chrome" "Mozilla" "Safari" "Firefox"])

(def status-codes [200 404 500 302])

(defn gen-events
  "Generates an infinite stream of random data"
  ([]
     (gen-events [] 0))
  ([c i]
     (lazy-cat c (gen-events
                  [{:event_id i
                    :host        (get hosts (rand-between 0 (count hosts)))
                    :status      (get status-codes (rand-between 0 (count status-codes)))
                    :url         (get urls (rand-between 0 (count urls)))
                    :user-agent  (get user-agents (rand-between 0 (count user-agents)))
                    :load-time   (rand-between 300 500)}]
                  (inc i)))))


(defn median
  "Calculates median for given array of numbers"
  [data]
  (let [sorted (sort data)
        n (count data)
        i (bit-shift-right n 1)]
    (if (even? n)
      (/ (+ (nth sorted (dec i)) (nth sorted i)) 2)
      (nth sorted (bit-shift-right n 1)))))

And pump data to the emitter:

(doseq [event (take 20000 (gen-events))]
  (notify emitter :page-load event))

(println "Total request count: " (state (get-handler emitter :total-request-count)))
(println "Count by url:" (state (get-handler emitter :count-by-url)))
(println "Count by user agent:" (state (get-handler emitter :count-by-user-agent)))
(println "Count by status code:" (state (get-handler emitter :count-by-status-code)))
(println "Fastest load time:" (state (get-handler emitter :load-time-fastest)))
(println "Slowest load time:" (state (get-handler emitter :load-time-slowest)))
(println "Last 20 load times:" (state (get-handler emitter :load-times-last-20)))
(println "Median of last 20 load times:" (float (median (state (get-handler emitter :load-times-last-20)))))

Why Stream Processing

There are many advantages to this approach to data processing. First of all, whenever you’re working with a stream, you have the latest data available at all times. There’s no need to go through an entire corpus of data; you only need to get the state of the handlers you’re interested in.

Every handler is reusable, and you can build graphs in such a way that a handler has several entry points rather than a single one. If the built-in EEP handlers are not enough for you, you can always implement the IHandler protocol and extend EEP with any other handler of your preference, which gives you the ability to have sliding, tumbling or monotonic windows, different types of buffers, custom aggregators and so on.

What You Can Do With It

Event streams are very common in every system. One application that’s been quite popular in recent years is “logs as data” processed as a stream. Every production system produces a stream of events and it becomes increasingly obvious to both engineers and business owners alike that tapping into that data can bring a lot of benefits.

To make this more useful, you can use stream processing libraries such as EEP to propagate events to mobile and Web clients, publish them to other apps using messaging technologies such as RabbitMQ, generate alerts and much more.

EEP is a generic project that can be used in a wide range of cases.

Enter Meltdown

The initial implementation of EEP was based on thread pools and functioned reasonably well, but after some research we decided to look at whether we could use the ring buffer abstraction from the LMAX Disruptor. After several iterations we ended up using Reactor, a new event-driven programming framework from Pivotal. It was a game changer: EEP got way faster, and routing got so much easier (and much faster, too).

Our Clojure interface to Reactor is a separate library, creatively named Meltdown. Now you can deploy a meltdown into production!

We will cover Meltdown in more detail in a separate blog post.

(Some) Future Plans

We’re working hard to bring good support for numerical analysis and a solid set of statistical functions right to your fingertips in EEP. You can watch our progress on GitHub and follow the news on Twitter @clojurewerkz.

EEP is a ClojureWerkz Project

EEP is part of the group of libraries known as ClojureWerkz, together with

  • Monger, a Clojure MongoDB client for a more civilized age
  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Cassaforte, a Clojure Cassandra client built around CQL
  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • Welle, a Riak client with batteries included
  • Neocons, a client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like EEP, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

@ifesdjeen on behalf of the ClojureWerkz Team

Money 1.4.0 Is Released

Money is a Clojure library that deals with monetary amounts and currencies, built on top of Joda Money.

1.4.0 is a minor release that has no breaking API changes.

Changes between Money 1.3.0 and 1.4.0

Rounding Multiplication

clojurewerkz.money.amounts/multiply now provides another arity that allows multiplication by a double, just like divide:

(require '[clojurewerkz.money.amounts    :as ams]
         '[clojurewerkz.money.currencies :as cu])

(ams/multiply (ams/amount-of cu/USD 45) 10.1 :floor)
;= USD 454.50

Change Log

Money change log is available on GitHub.

Money is a ClojureWerkz Project

Money is part of the group of libraries known as ClojureWerkz, together with

  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • Monger, a Clojure MongoDB driver for a more civilized age
  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Welle, a Riak client with batteries included
  • Neocons, a client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like Money, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

@michaelklishin on behalf of the ClojureWerkz Team

Spyglass 1.1.0 Is Released

TL;DR

Spyglass is a very fast Clojure client for Memcached (as well as Couchbase and Kestrel) built on top of SpyMemcached.

1.1.0 is a minor release that introduces several minor features and has a few breaking changes.

Changes between 1.0.0 and 1.1.0

Clojure 1.4 Requirement

Spyglass 1.1.0 drops support for Clojure 1.3.

Heroku Add-on Support

By using SpyMemcached 2.8.9, you can now use Spyglass with Heroku Memcached add-ons:

(defproject my-great-project "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [clojurewerkz/spyglass "1.1.0"
                  :exclusions [spy/spymemcached]]
                 [spy/spymemcached "2.8.9"]]
  :repositories {"spy-memcached" {:url "http://files.couchbase.com/maven2/"}})

Contributed by Connor Mendenhall.

Clojure 1.5 By Default

Spyglass now depends on org.clojure/clojure version 1.5.1. It is still compatible with Clojure 1.4+, and if your project.clj depends on a different version, that version will be used; 1.5 is simply the default now.

We encourage all users to upgrade to 1.5; it is a drop-in replacement for the majority of projects out there.
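
If you prefer to make this explicit rather than rely on your top-level dependency taking precedence, the usual Leiningen idiom is an exclusion:

;; your project's own org.clojure/clojure dependency is used instead
[clojurewerkz/spyglass "1.1.0" :exclusions [org.clojure/clojure]]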

Asynchronous Cache Store

Spyglass now ships both sync and async implementations of clojure.core.cache.

To instantiate the async store, use clojurewerkz.spyglass.cache/async-spyglass-cache-factory. clojurewerkz.spyglass.cache/spyglass-cache-factory was renamed to clojurewerkz.spyglass.cache/sync-spyglass-cache-factory.

Contributed by Joseph Wilk.

Fix Authentication Support

clojurewerkz.spyglass.client/text-connection and clojurewerkz.spyglass.client/bin-connection no longer fail when credentials are passed in.

Empty gets Responses

clojurewerkz.spyglass.client/gets now correctly handles responses for keys that do not exist.

GH issue: #4.

SASL (Authentication) Support

clojurewerkz.spyglass.client/text-connection and clojurewerkz.spyglass.client/bin-connection now support credentials:

(ns my.service
  (:require [clojurewerkz.spyglass.client :as c]))

;; uses credentials from environment variables, e.g. on Heroku
(c/text-connection "127.0.0.1:11211"
                   (System/getenv "MEMCACHE_USERNAME")
                   (System/getenv "MEMCACHE_PASSWORD"))

When you need to fine-tune things and want to use a custom connection factory, you need to instantiate an auth descriptor and pass it explicitly, like so:

(ns my.service
  (:require [clojurewerkz.spyglass.client :as c])
  (:import [net.spy.memcached.auth AuthDescriptor]))

(let [ad (AuthDescriptor/typical (System/getenv "MEMCACHE_USERNAME")
                                 (System/getenv "MEMCACHE_PASSWORD"))]
  (c/text-connection "127.0.0.1:11211" (c/text-connection-factory :failure-mode :redistribute
                                                                  :auth-descriptor ad)))

Blocking Deref for Futures

Futures returned by async Spyglass operations now implement “blocking dereferencing”: they can be dereferenced with a timeout and default value, just like futures created with clojure.core/future and similar.
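
For example, with an async get (assuming a Memcached node on 127.0.0.1:11211 and the async-get function):

(require '[clojurewerkz.spyglass.client :as c])

(def tc (c/text-connection "127.0.0.1:11211"))

;; dereference with a 100 ms timeout and a default value,
;; just like a clojure.core/future
(deref (c/async-get tc "a-key") 100 :miss)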

Contributed by Joseph Wilk.

Support For Configurable Connections

New functions clojurewerkz.spyglass.client/text-connection-factory and clojurewerkz.spyglass.client/bin-connection-factory provide a Clojuric way of instantiating connection factories. Those factories, in turn, can be passed to new arities of clojurewerkz.spyglass.client/text-connection and clojurewerkz.spyglass.client/bin-connection to control failure mode, default transcoder and so on:

(ns my.service
  (:require [clojurewerkz.spyglass.client :as c]))

(c/text-connection "127.0.0.1:11211" (c/text-connection-factory :failure-mode :redistribute))

core.cache Implementation

clojurewerkz.spyglass.cache now provides a clojure.core.cache implementation on top of Memcached:

(ns my.service
  (:require [clojurewerkz.spyglass.client :as sg]
            [clojurewerkz.spyglass.cache  :as sc]
            [clojure.core.cache           :as cc]))

(let [client (sg/text-connection)
      cache  (sc/sync-spyglass-cache-factory client)]
  (cc/has? cache "a-key")
  (cc/lookup cache "a-key"))

SyncSpyglassCache uses synchronous operations from clojurewerkz.spyglass.client. An asynchronous implementation that returns futures will be added in the future.

SpyMemcached 2.8.10

SpyMemcached has been upgraded to 2.8.10.

Improved Couchbase Support

clojurewerkz.spyglass.couchbase/connection is a new function that connects to Couchbase with the given bucket and credentials. It returns a client that the regular clojurewerkz.spyglass.client functions can use.
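
A sketch of what this looks like; the endpoint, bucket and argument order are assumptions based on the description above:

(require '[clojurewerkz.spyglass.couchbase :as cb]
         '[clojurewerkz.spyglass.client    :as c])

;; hypothetical Couchbase endpoint, bucket and password
(let [conn (cb/connection ["http://127.0.0.1:8091/pools"] "a-bucket" "a-password")]
  (c/set conn "a-key" 300 "a-value")
  (c/get conn "a-key"))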

Change Log

We recommend that all users give 1.1.0 a try.

Spyglass change log is available on GitHub.

Spyglass is a ClojureWerkz Project

Spyglass is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • Monger, a Clojure MongoDB client for a more civilized age
  • Neocons, a feature rich idiomatic Clojure client for the Neo4J REST API
  • Welle, a Riak client with batteries included
  • Quartzite, a powerful scheduling library

and several others. If you like Spyglass, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

Michael on behalf of the ClojureWerkz Team

Langohr 1.4.1 Is Released

TL;DR

Langohr is a Clojure RabbitMQ client that embraces the AMQP 0.9.1 model.

1.4.1 is a bug fix release.

Changes between Langohr 1.4.0 and 1.4.1

Automatic Recovery Fix

Automatic recovery now can be enabled without causing an exception.

Change Log

Langohr change log is available on GitHub.

Langohr is a ClojureWerkz Project

Langohr is part of the group of libraries known as ClojureWerkz, together with

  • Elastisch, a minimalistic well documented Clojure client for ElasticSearch
  • Welle, a Riak client with batteries included
  • Monger, a Clojure MongoDB client for a more civilized age
  • Neocons, a client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like Langohr, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

Michael on behalf of the ClojureWerkz Team

Route One 1.0.0-rc1 Is Released

What is Route One

Route One is a Clojure DSL for URL/URI/path generation from a route map, compatible with Compojure’s Clout.

Route One is intentionally a very small library that lets you do two things:

  • Define routes
  • Generate paths and URLs from those routes

Route One can be used as part of Web applications, mail delivery services and any other application that needs to generate URLs using a predefined map of routes.

Supported Clojure Versions

Route One targets Clojure 1.4+, is tested against 3 Clojure versions and 3 JDKs on travis-ci.org, and is released under the Eclipse Public License.

Documentation and Examples

Here’s what Route One API looks like:

(ns my.app
  (:require [clojurewerkz.route-one.core :refer :all]))

;; define your routes
(defroute about "/about")
(defroute faq "/faq")
(defroute help "/help")
(defroute documents "/docs/:title")
(defroute category-documents "/docs/:category/:title")
(defroute documents-with-ext "/docs/:title.:ext")

;; generate relative paths (by generated fns)
(documents-path :title "a-title") ;; => "/docs/a-title"
(documents-path :title "ohai") ;; => "/docs/ohai"

(path-for "/docs/:category/:title" { :category "greetings" :title "ohai" }) ;; => "/docs/greetings/ohai"
(path-for "/docs/:category/:title" { :category "greetings" }) ;; => IllegalArgumentException, because :title value is missing

(with-base-url "https://myservice.com"
  (url-for "/docs/title"  { :title "ohai" }) ;; => "https://myservice.com/docs/title"
  (url-for "/docs/:title" { :title "ohai" }) ;; => "https://myservice.com/docs/ohai"
  (url-for "/docs/:category/:title" { :category "greetings" :title "ohai" }) ;; => "https://myservice.com/docs/greetings/ohai"
  (url-for "/docs/:category/:title" { :category "greetings" }) ;; => IllegalArgumentException, because :title value is missing
)

;; generate full URLs (by generated fns)
(with-base-url "https://myservice.com"
  (documents-url :title "a-title") ;; => "https://myservice.com/docs/a-title"
  (category-documents-url :category "greetings" :title "a-title") ;; => "https://myservice.com/docs/greetings/a-title"
)

Learn more in the documentation.

License

The source is available on GitHub. We also use GitHub to track issues.

The ClojureWerkz Team