AppDynamics Launches Extension Buildpack for Pivotal Cloud Foundry Applications

Not long ago, we told you about Pivotal Cloud Foundry (PCF) buildpacks and service brokers, and all the ways you can deploy AppDynamics agents in a PCF environment.

Buildpack is the key concept here. When you deploy to PCF, the buildpack is your foundation: you push your application along with a buildpack, which incorporates all the logic needed to connect to various PCF services. Because the Cloud Foundry platform includes a mechanism for adding support for third-party services like AppDynamics, it’s easy to add our APM instrumentation to all your applications without making any code changes. We’ve been doing this for some time, of course, and Pivotal recently recognized AppDynamics for our outstanding solutions and services, specifically our support for .NET in the Pivotal environment.

Here’s a new example of how we’re staying on the front edge of PCF development. For the first time, we’re using an innovative Cloud Foundry feature called multi-buildpacks. Starting with v4.5.514 of the AppDynamics Application Monitoring for PCF tile, we’re offering an AppDynamics Extension Buildpack that works in tandem with standard buildpacks using Cloud Foundry’s multi-buildpack workflow.

Pivotal’s .NET team has been leading the way in multi-buildpack development (more on this in a bit), and we’ve recognized the value of this approach. Now our goal is to apply the same model to AppDynamics’ APM support for all PCF applications.

The Standard Buildpack Model

With a traditional buildpack, we build the logic for integrating AppDynamics agents directly into the official Cloud Foundry community buildpack. We test our code against the main buildpack code, which is maintained by Pivotal on behalf of the Cloud Foundry community. We then send a pull request to Pivotal, which takes our code and releases it as an official part of the buildpack. This is a well-established model carefully managed by Pivotal and adhered to by third-party service providers like AppDynamics. It works because it’s a well-known and understood mechanism. But there’s a better way to do it.

The Advantages of Multi-Buildpack

Pivotal’s multi-buildpack concept is like a layer cake. The main buildpack—the base layer—is the official community buildpack. Third-party providers like AppDynamics provide additional functionality (or layers) on top of the base layer. The end result is a multi-buildpack that can be deployed as essentially a single piece. For example, here’s how we’d push a .NET HWC application with the AppDynamics-specific extension (appdbuildpack) and the base buildpack from Cloud Foundry (hwc_buildpack):

cf push -b appdbuildpack -b hwc_buildpack -s windows2016

This is a good model with many benefits, including a clear separation of responsibilities. Pivotal is responsible for the core buildpack and how it links to the service broker and other parts of the Cloud Foundry platform. It also manages all the services your application needs, such as routing and deployment. Third-party providers like AppDynamics are responsible for how their agent installs. If a third-party service introduces a bug, the glitch won’t break the main buildpack.

From our perspective, another benefit of this model is that it gives AppDynamics more control over what goes inside our buildpack, such as custom configuration for our APM Agents. Suppose, for instance, you want to include a custom configuration definition file or custom logging capabilities. It’s very easy to do so. Our buildpack extension defines a folder where you can include the appropriate custom files when you push the application. Once deployed, the application will have the AppDynamics agent installed with the custom configuration file in place. This eliminates the need to fork a buildpack for the sake of customizing agent behavior.
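For illustration, a hypothetical layout might look like the following; the actual folder and file names come from the extension buildpack’s documentation, so treat these as placeholders:

my-dotnet-app/
├── MyApp.dll
└── appdynamics/            # hypothetical folder the extension scans
    └── custom-config.xml   # custom agent configuration to apply

cf push my-dotnet-app -b appdbuildpack -b hwc_buildpack -s windows2016

After the push, the staged application carries the AppDynamics agent with the custom configuration alongside it.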

From the customer’s perspective, the multi-buildpack model provides a strong support system. It’s very clear who they need to work with (e.g., AppDynamics or Pivotal) for help with specific components or services. Another plus is that we bundle this buildpack with the AppDynamics Service Broker tile. So when you install the latest version of our tile, it will automatically install the buildpack in your environment. And when you deploy an application using any of the main language buildpacks, our extension will be applied on top.

AppDynamics and Multi-Buildpack

Our goal isn’t simply to make AppDynamics work on PCF, it’s to make it work in the best way possible. We have already added support for .NET HWC applications and .NET Core to our AppDynamics Extension Buildpack, and we will soon bring this approach to other dynamic language environments as well, including Python, Go and Node.js. We will also add support for the Java buildpack to do advanced configuration of AppDynamics Java Agents, although we will, of course, continue to support basic configuration in the standard Java buildpack.

See for yourself how the AppDynamics Extension Buildpack (multi-buildpack approach) can make your life easier!

Three Productive Go Patterns to Put on Your Radar

By most definitions, with just 25 keywords, a terse spec, and a commitment to orthogonal features, Go is a simple language. Go programmers are expected to use the basic building blocks offered by the language to compose more complex abstractions. Over time, best-in-class solutions to frequently encountered problems tend to be discovered, shared, and replicated. These design patterns draw heritage from other languages, but often look and feel distinct in Go. I’d like to cast a spotlight on three patterns I use over and over again.

The Dependency Injection Pattern

Dependency injection (DI) is actually an umbrella term that can mean vastly different things depending on context. The core idea is this: Give or inject dependencies to a component, rather than have the component take dependencies from the environment. Beyond that, things can get a little complicated. Some people use DI to refer to dependency injection frameworks, typically a package or object into which you register dependencies and later inject them into components that use them, usually by some key schema. But this style of DI isn’t a good match for Go, primarily because Go lacks the dynamic typing required to serve a literate API. Most DI frameworks in Go resort to stringly typed keys (meaning variables are often typed as strings), and rely on reflection to reify the dependencies to concrete types or interfaces, which is always a red flag.

Instead, a more basic version of DI is particularly well-suited to Go programs. The inspiration comes from functional programming, specifically, the idea of closure scoping. And it’s nothing special, really: Just provide all the dependencies to a component as line items in the component’s constructor.

// NewHandler constructs and returns a useable request handler.
func NewHandler(
	db *sql.DB,
	requestDuration *metrics.Histogram,
	logger *log.Logger,
) *Handler {
	return &Handler{
		db:     db,
		dur:    requestDuration,
		logger: logger,
	}
}

The clear consequence of this pattern is that constructors begin to get very long, especially as business capability grows. That’s a cost. But there’s also a notable benefit. Namely, there’s great virtue in making dependencies explicit at the callsite, especially to future readers and maintainers of your code. It’s immediately obvious that the Handler takes and uses a database, a histogram, and a logger. There’s no need to hunt down dependency relationships far from the site of construction.

The writer pays a cost of keystrokes, but the reader receives the benefit of comprehension. Outside of hobby projects, we know that code is read far more often than it is written. It’s reasonable, then, to optimize for the benefit of the reader—even if it comes at some expense to the writer. But we have some tricks up our sleeve to make long constructors for large components more tractable.

One approach is to use a tightly scoped config struct, containing only those dependencies used by the specific component. It’s typical to omit individual fields when building a struct, so constructors should detect nils, when appropriate, and provide sane default alternatives.

// NewHandler constructs and returns a useable request handler.
func NewHandler(c HandlerConfig) *Handler {
	if c.RequestDuration == nil {
		c.RequestDuration = metrics.NewNopHistogram()
	}
	if c.Logger == nil {
		c.Logger = log.NewNopLogger()
	}
	return &Handler{
		db:     c.DB,
		dur:    c.RequestDuration,
		logger: c.Logger,
	}
}

// HandlerConfig captures the dependencies used by the Handler.
type HandlerConfig struct {
	// DB is the backing SQL data store. Required.
	DB *sql.DB

	// RequestDuration will receive observations in seconds.
	// Optional; if nil, a no-op histogram will be used.
	RequestDuration *metrics.Histogram

	// Logger is used to log warnings unsuitable for clients.
	// Optional; if nil, a no-op logger will be used.
	Logger *log.Logger
}

If a component has a few required dependencies and many optional dependencies, the functional options idiom may be a good fit.

// NewHandler constructs and returns a useable request handler.
func NewHandler(db *sql.DB, options ...HandlerOption) *Handler {
	h := &Handler{
		db:     db,
		dur:    metrics.NewNopHistogram(),
		logger: log.NewNopLogger(),
	}
	for _, option := range options {
		option(h)
	}
	return h
}

// HandlerOption sets an option on the Handler.
type HandlerOption func(*Handler)

// WithRequestDuration injects a histogram to receive observations in seconds.
// By default, a no-op histogram will be used.
func WithRequestDuration(dur *metrics.Histogram) HandlerOption {
	return func(h *Handler) { h.dur = dur }
}

// WithLogger injects a logger to log warnings unsuitable for clients.
// By default, a no-op logger will be used.
func WithLogger(logger *log.Logger) HandlerOption {
	return func(h *Handler) { h.logger = logger }
}
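A call site might then look like this (a sketch, assuming a *sql.DB and a histogram are already in scope):

h := NewHandler(db,
	WithRequestDuration(requestDuration),
	WithLogger(logger),
)

Callers that want pure defaults simply write NewHandler(db).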

By using this simplified DI pattern, we’ve made the dependency graph explicit and avoided hiding dependencies in global state. It’s also worth using a simplified definition of dependency—that is, nothing more than something that a component uses to do its work. By this definition, loggers and metrics are clearly dependencies. So by extension, they should be treated identically to other dependencies. This can seem a bit awkward at first, especially when we’re used to thinking of e.g. loggers as incidental or ubiquitous. But by lifting them up to the regular DI mechanism, we do more than establish a consistent language for expressing needs-a relationships. We’ve made our components more testable by isolating them from the shared global environment. Good design patterns tend to have this effect—not only improving the thing they’re designed to improve, but also having positive knock-on effects throughout the program.
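To make that testability claim concrete, here is a minimal test sketch against the config-struct variant above; testDB is a hypothetical helper that returns a throwaway *sql.DB:

func TestNewHandlerDefaults(t *testing.T) {
	// Supply only the required dependency; the optional logger and
	// histogram fall back to no-ops, so no global state leaks in.
	h := NewHandler(HandlerConfig{DB: testDB(t)}) // testDB is hypothetical
	if h == nil {
		t.Fatal("NewHandler returned nil")
	}
}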

The Client-Side Interface Pattern

Concretely, interfaces are nothing more than a collection of methods that types can choose to implement. But semantically, interfaces are much more. They define behavioral contracts between components in a system. Understanding interfaces as contracts helps us decide where and how to define them. And just as in contract testing for microservice architectures, the right place to write a contract is often with the consumer.

Consider a package with a type. Go programmers frequently model that type and its constructor like the following.

package foo

// widget is an unexported concrete type.
type widget struct{ /* ... */ }

func (w *widget) Bop(int) int                     { /* ... */ }
func (w *widget) Twist(string) ([]float64, error) { /* ... */ }
func (w *widget) Pull() (string, error)           { /* ... */ }

// Widget is an exported interface.
type Widget interface {
	Bop(int) int
	Twist(string) ([]float64, error)
	Pull() (string, error)
}

// NewWidget constructor returns the interface.
func NewWidget() Widget { /* ... */ }

In our contract model of interfaces, this establishes the Widget contract alongside the type that implements it. But how can we predict which methods consumers actually want to use? Especially as the type grows functionality and our interface grows methods, we lose utility. The bigger the interface, the weaker the abstraction.

Instead, consider having your constructors return concrete types and letting package consumers define their own interfaces as required. For example, consider a client that only needs to Bop a Widget.

func main() {
	w := foo.NewWidget() // returns a concrete *foo.Widget
	process(w)           // takes a bopper, which *foo.Widget satisfies
}

// bopper models part of foo.Widget.
type bopper interface {
	Bop(int) int
}

func process(b bopper) {
	println(b.Bop(123))
}

The returned type is concrete, so all of its methods are available to the caller. The caller is free to narrow its scope of interest by capturing the interesting methods of Widget in an interface and using that interface locally. In so doing the caller defines a contract between itself and package foo: NewWidget should always produce something that I can Bop. And even better, that contract is enforced by the compiler. If NewWidget ever stops being Boppable, I’ll see errors at build time.

More icing on the cake: Testing the process function is now a lot easier, as we don’t need to construct a real Widget. We just need to pass it something that can be Bopped—probably a fake or mock struct, with predictable behavior. And this all aligns with the Code Review Comments guidelines around interfaces, which state that:

Go interfaces generally belong in the package that uses values of the interface type, not the package that implements those values. The implementing package should return concrete (usually pointer or struct) types: that way, new methods can be added to implementations without requiring extensive refactoring.
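To see the testing benefit in code, here is a minimal sketch; bopperStub is a name invented for this example:

// bopperStub is a trivial stand-in that satisfies bopper.
type bopperStub struct{}

func (bopperStub) Bop(int) int { return 42 }

func TestProcess(t *testing.T) {
	process(bopperStub{}) // no real foo.Widget required
}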

Sometimes there is a case for defining interfaces near to concrete types. For example, in the standard library, package hash defines a Hash interface, which types in subsidiary packages implement. Although the interface is defined in the producer, the semantics are the same: a tightly scoped contract that the package commits to supporting. If your contract is similarly tightly scoped and satisfied by different types with different implementations, then it may make sense to include it alongside those implementations as a signal of intent to your consumers.

The Actor Pattern

Like dependency injection, the idea of the actor pattern can mean wildly different things to different people. But at the core, it’s not much more than an autonomous component that receives input and probably produces output. In Go, we learned pretty early on that a great way to model an actor is as an infinitely looping function selecting on a block of channels. The goroutine acts as a synchronization point for state mutations in the actor, in effect making the loop body single threaded—a huge win for clarity and comprehensibility. We typically name this function `run` or `loop` and define it as a method on a struct type that holds the channels.

type Actor struct {
	eventc   chan Event
	requestc chan reqRes
	quitc    chan struct{}
}

func (a *Actor) loop() {
	for {
		select {
		case e := <-a.eventc:
			a.consumeEvent(e)
		case r := <-a.requestc:
			res, err := a.handleRequest(r.req)
			r.resc <- resErr{res, err}
		case <-a.quitc:
			return
		}
	}
}

Finally, we push onto those channels in our exported methods, forming our public API, which is naturally goroutine-safe.

func (a *Actor) SendEvent(e Event) {
	a.eventc <- e
}

func (a *Actor) MakeRequest(r *Request) (*Response, error) {
	resc := make(chan resErr)
	a.requestc <- reqRes{req: r, resc: resc}
	res := <-resc
	return res.res, res.err
}

func (a *Actor) Stop() {
	close(a.quitc)
}

type reqRes struct {
	req  *Request
	resc chan resErr
}

type resErr struct {
	res *Response
	err error
}

This works great in a lot of circumstances. But it does require us to define a unique channel per distinct public API method. It also makes things a little tricky when we need to return information to the caller. In this example, we use an intermediating `reqRes` type, with a response channel, but there are other possibilities.

There is an interesting alternative. Rather than having one channel per method, we use a single channel of unadorned functions. In the loop method, we simply execute every function that arrives; the exported methods define their functionality inline.

type Actor struct {
	actionc chan func()
	quitc   chan struct{}
}

func (a *Actor) loop() {
	for {
		select {
		case f := <-a.actionc:
			f()
		case <-a.quitc:
			return
		}
	}
}


func (a *Actor) SendEvent(e Event) {
	a.actionc <- func() {
		a.consumeEvent(e)
	}
}

func (a *Actor) HandleRequest(r *Request) (res *Response, err error) {
	done := make(chan struct{})
	a.actionc <- func() {
		defer close(done) // outer func shouldn't return before values are set
		res, err = a.handleRequest(r)
	}
	<-done
	return res, err
}

This style carries several advantages:

  1. There are fewer mechanical bits in the actor.
  2. We have much more freedom in the public API methods to return values to callers.
  3. Business logic is defined in the corresponding public API method, rather than hidden in an unexported loop method.
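Neither variant shows how the loop gets started. A common arrangement, sketched here with a hypothetical NewActor constructor, is to wire the channels and spawn the goroutine in one place, so callers receive a ready-to-use, goroutine-safe Actor:

func NewActor() *Actor {
	a := &Actor{
		actionc: make(chan func()),
		quitc:   make(chan struct{}),
	}
	go a.loop() // the loop goroutine owns all state mutations
	return a
}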

Your Patterns

The dependency injection, client-side interface, and actor patterns can make your programs more explicit, easier to test, and simpler to reason about. Rather than defaulting to the first solution that comes to mind, try out one or all of these three distinct Go patterns.