Wednesday, 25 July 2018

Spotify's Engineering Culture: My Interpretation and Summary

I've come across the so-called "Spotify model" several times. Pretty much every organisation I work with uses it in one way or another, either as inspiration for their target organisation or as an example of what they would like their IT culture to be like.

Thanks to the brilliant 2-part video posted by Henrik Kniberg, I was able to listen, visualise and truly digest what this engineering culture actually means from an organisational, technological and people/culture point of view.

To this end, I've created the following presentation, with the intention of also sharing my interpretation of their engineering approach.

The Spotify engineering culture empowers its people at many different levels, as it provides a very good balance of freedom and structure. Its open approach towards collaboration, respect and trust ensures that Squads are aligned and share knowledge and experiences, thus avoiding common pitfalls –whilst not reducing the amount of innovation.

Their experimental "fail fast-learn fast-improve fast" culture is an engine for innovation, as teams are encouraged to try new ideas out without worrying about being punished if some of those ideas fail.

Spotify’s decoupled architecture (probably based on Microservices, although not explicitly mentioned) is most likely a result of their engineering culture, as opposed to being purely driven by technology and/or architectural preferences. I can't help but say it's Conway's law in action.

This model, however, is not for all organisations, and many will find it very difficult to adopt. Especially for large traditional corporations, where the level of politics and bureaucracy is so high that change takes ages to occur, shifting to the Spotify way of doing things will be a huge undertaking. For such [traditional] organisations, keeping pace with more innovative companies (those that do succeed in adopting a Spotify-like model) will be a struggle. On the flip side, large organisations that do manage to shift will be able to benefit from their size and market reach plus the agility, speed and innovation enjoyed by the likes of Spotify. Only time will tell!

Tuesday, 15 May 2018

A comparison of push vs phone-home communication approaches between API Gateways and Management Services

API Gateways deliver critical runtime capabilities in enterprise-wide API management infrastructures. However, such runtime capabilities must also be complemented with design-time and governance capabilities in support of activities such as API lifecycle management, API design, policy definition and implementation, deployment, retirement, monitoring, and so on.

The aforementioned design-time/governance capabilities are often offered by API management vendors as a separate Management Service infrastructure that augments/complements the runtime infrastructure (the API Gateways). Needless to say, in order for the runtime and design-time/governance infrastructure to work together cohesively as a collective whole, there must be some sort of effective and reliable communication between these two main components.

Whereas some products, for example the Oracle API Platform Cloud Service, deliver a phone-home approach whereby API Gateways initiate communication with the management infrastructure, other vendors implement a push approach whereby the Management Service is responsible for establishing and handling the connection to the API Gateways.

Both approaches are fundamentally different, and understanding how such differences can impact/influence a solution becomes even more critical as the need for API Gateways increases, e.g. as a result of adopting cloud or Microservices Architectures.

Furthermore, as cloud adoption continues to rocket, vendors also offer Management Service capabilities as a PaaS cloud service. This is important and not trivial, as it means that communication between the PaaS-based management infrastructure and the API Gateways must be in place prior to implementing the solution.

This article compares these two main communication strategies and highlights key differences including pros and cons (from the point of view of the author).

Installation / Configuration:
When installing/configuring an API Management solution, both the Management Service and the API Gateways must be provisioned and configured so they can talk to each other. The communication style can have a considerable impact on the steps required to implement the solution (e.g. opening firewalls, etc).

Push:
  • Pros:
    • No obvious benefit.
  • Cons:
    • May require several network/firewall changes in order to allow inbound connections to the API Gateways if these reside in a DMZ, e.g. on-premises. This may be even more complicated if the ports used are non-standard and random.
Phone-home (pull):
  • Pros:
    • Installation / configuration of the API Gateway "might" not require any major changes in networks/firewalls as connections are initiated by the API gateways to the management service (in principle) using standard ports (e.g. HTTPS/443).
  • Cons:
    • Outbound connectivity to the Management Service must be in place. If the management infrastructure resides in the cloud, this means outbound internet access is required. Not necessarily a con but an important consideration.
Deployment of APIs:
Once APIs are defined along with their relevant policies (e.g. OAuth, API key validation, throttling, rate-limiting, API plans, etc.) they are deployed to the relevant API Gateways. The more API Gateways an API has to be deployed to, the more complicated and error-prone the process can be.

Push:
  • Pros:
    • The main benefit is that deployment of APIs occurs immediately, as soon as a deployment task is initiated, because the Management Service initiates the connection to the API Gateway.
  • Cons:
    • As a direct connection to the API Gateway is required, some vendor offerings might require the Management Service and API Gateways to reside in the same network segment. This is an important constraint; it also means that as more API Gateways are required (e.g. in the cloud or in different locations), additional Management Services have to be implemented, which is not ideal and introduces additional management overhead, complexity and costs.
    • Depending on the size of the solution and how many Management Services are in place, deployment of a single API to multiple API Gateways in one-go, can be a non-trivial task.
    • Issues in communication between Management Service and API Gateways may only become evident during the deployment of APIs, which isn't ideal.
Phone-home (pull):
  • Pros:
    • It is the API Gateway’s responsibility to phone home and download/configure APIs, typically at a pre-defined interval (e.g. every minute, every hour).
    • The intervals also act as a heartbeat, so issues in communication between API Gateway and Management Service should become evident rather quickly, and not during a deployment.
  • Cons:
    • Deployments may take a while to complete if the pre-defined time intervals are long (basically, deployment won't be immediate).
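To make the contrast concrete, here is a deliberately simplified, hypothetical sketch of the two models (in Python, with in-memory objects standing in for real network calls; all class and method names below are my own invention, not any vendor's API):

```python
class SimpleGateway:
    """Minimal stand-in for an API Gateway that receives pushed deployments."""
    def __init__(self, name):
        self.name = name
        self.apis = {}


class PushManagementService:
    """Push model: the Management Service initiates the connection and deploys
    an API to every registered gateway immediately (requires inbound
    connectivity to each gateway)."""
    def __init__(self):
        self.gateways = []

    def register(self, gateway):
        self.gateways.append(gateway)

    def deploy(self, api_name, config):
        # Deployment is immediate; connectivity issues would surface here,
        # i.e. during the deployment itself.
        for gw in self.gateways:
            gw.apis[api_name] = config


class PullManagementService:
    """Phone-home model: the Management Service only records the desired
    state; gateways pull it over an outbound connection."""
    def __init__(self):
        self.desired_apis = {}

    def deploy(self, api_name, config):
        # Nothing is sent anywhere: the change is picked up on the next poll.
        self.desired_apis[api_name] = config


class PhoneHomeGateway:
    """Gateway that phones home at a pre-defined interval (the polling loop
    is not shown; poll_once() would be invoked e.g. every minute)."""
    def __init__(self, gateway_id, mgmt):
        self.gateway_id = gateway_id
        self.mgmt = mgmt
        self.apis = {}
        self.polls = 0  # each successful poll doubles as a heartbeat

    def poll_once(self):
        # In a real product this would be an outbound HTTPS call (e.g. :443).
        self.apis = dict(self.mgmt.desired_apis)
        self.polls += 1
```

With the push service, a deploy lands on every gateway immediately; with the pull pair, the gateway only sees the change on its next poll, which is why deployments aren't instant but communication problems surface with every missed heartbeat.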
Infrastructure Topology:
As briefly mentioned previously, how the management infrastructure and the API Gateways communicate imposes important considerations regarding the overall API management infrastructure topology and the options available.

Push:
  • Pros:
    • Typically the Management Service can be installed and configured by the client in any infrastructure of choice. This can be beneficial if the full API Management infrastructure is required in a closed-loop network with several constraints (e.g. a cruise ship).
  • Cons:
    • Potential proliferation of management infrastructure as more than one Management Service might be required.
    • Increased complexity in the management of the infrastructure and most likely also an increase in costs.
Phone-home (pull):
  • Pros:
    • Typically it means that API Gateways can share a single management infrastructure, which simplifies the solution and reduces the management overhead and costs.
  • Cons:
    • If the management infrastructure is only available as a PaaS capability, it might not be an option for solutions with special network constraints or requirements.
Solution at Scale:
Some very large organisations may need to implement API Gateways at very large scale (e.g. hundreds or even thousands). In such large-scale implementations, every small factor becomes an important consideration, and how Management Services and API Gateways communicate is certainly one of them.

Push:
  • Pros:
    • If the solution requires multiple, separate management infrastructures (which could be for organisational reasons), a push model could work well.
  • Cons:
    • A proliferation of management infrastructure results in overall higher TCO and more complexity.
    • Additional tooling may be required to provide some sort of centralised monitoring capabilities so the overall API management infrastructure can be monitored.

Phone-home (pull):
  • Pros:
    • This solution is more easily scalable, as more API Gateways can be added without necessarily having to also increase the number of Management Services.
    • A simpler solution to operate, given the reduced number of Management Services.
    • Deployment of APIs becomes a much easier task: a single Management Service can easily deploy to several API Gateways across many locations.
  • Cons:
    • A single management infrastructure could also introduce a single point of failure; therefore adequate high-availability infrastructure must be in place.

In the majority of cases, the business or IT management won't be interested in understanding how API Gateways and the management infrastructure interact. However, a different story emerges if the implications of such communication are presented in terms of TCO impact, scalability of the solution and business agility. This article summarised some of these implications in terms of installation/configuration, deployment, infrastructure topology and scalability. But ultimately, which solution (push vs phone-home) works best for your organisation should be determined by the business requirements driving the need for APIs and their management, and by how each approach can best help realise those benefits.

Friday, 2 February 2018

Is BPM Dead, Long Live Microservices?

With the massive industry-wide uptake of Microservices Architecture, and with it the adoption of patterns such as Event Sourcing, CQRS and Saga as the means for Microservices to asynchronously communicate with each other and effectively "choreograph" business processes, it might seem as if the days of process orchestration using BPM engines (e.g. Oracle Process Cloud, now also part of Oracle Integration Cloud, Pega, Appian, etc.) or BPEL (or BPEL-like) engines are over.

Although the use of choreography and its associated patterns (such as the aforementioned) makes tons of sense in many use cases, I've come across a number of cases where choreography can be impractical.

Some examples:
  • Data needs to be collected and aggregated from multiple services -e.g. check the Composition pattern. Note that this pattern doesn't necessarily imply that an orchestration is required; it could be that data is collected and aggregated (not transformed) into a single response. But if data collected from multiple sources also needs to be transformed into a common response payload, then it feels pretty close to one of the typical use cases for orchestration.
  • The process is human-centric and can't be fully automated. Basically, at some point a human has to take an action in order for the process to complete (e.g. approval of a credit card application, or a credit check) -BPM/orchestration tools tend to be quite good at this.
  • There is a need for very clear visibility of the end-to-end business process. In traditional BPM tools this is fairly straightforward; with choreography/events, although it is possible to monitor individual events, a form of correlation would be required to build an end-to-end view of the status of a business process.
It was "perhaps" for some of these reasons that Netflix developed their own process orchestration engine called Netflix Conductor (now also open sourced). Their reason for developing this tool, in their own words:

"With peer to peer task choreography, we found it was harder to scale with growing business needs and complexities" --read this link for the complete article.

And it's not just Netflix: the likes of Camunda, Zeebe and Baker seem to have spotted the need for such microservices-oriented process engines, and their solutions fit well with this architectural style.

This is also one of the reasons why the concept of semi-decoupled services has emerged -in other words, a service that's not entirely independent, either because it runs on a shared runtime or because it conducts an orchestration and therefore has runtime coupling to other services. This compares to a fully-decoupled service that only uses choreography to interact with other services (aka a Microservice).

Sample Use Case
In order to better illustrate what's being said, take the following sample use case:
  • A simple Credit Check process that determines whether a customer's credit score is adequate for a given transaction
  • In certain scenarios (e.g. a credit score just above the threshold), manual human intervention is required to accept or reject an application.
  • The process can be implemented in a number of ways:
    1. As an orchestrated (synchronous) business process
    2. As a choreographed (asynchronous) business process - no process engine
    3. Choreographed but with process engine
Let's have a deeper look:

1) Orchestrated (Synchronous) Credit Check Process
An orchestrated business process implemented with a traditional process engine tool of choice, and synchronous because both request and reply would be within the same HTTP thread. As is notable in the diagram, this is not really dramatically different from a traditional SOA architecture. The process engine could be a BPMN 2.0 engine or a BPEL orchestration tool (as many support human workflows).

The main advantage of this approach is that process metrics are clearly visible end to end, plus it would be fairly straightforward to implement, including important capabilities such as exception and compensation handling. However, performance- and scalability-wise, as all HTTP requests are synchronous, if threads can't be served rapidly they could accumulate and become a bottleneck (e.g. hung threads).

It's also worth noting that these advantages hold so long as a process engine is already available for use and it supports REST as an entry point to trigger a process. Should this not be the case, the effort involved in standing up new infrastructure would probably counter the benefits, and other options might be more viable.
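As a sketch of what such a synchronous orchestration boils down to (the thresholds, service interface and outcome names below are purely illustrative assumptions, not taken from any specific engine):

```python
def credit_check_process(customer_id, amount, score_service, approval_tasks):
    """Orchestrated (synchronous) credit check: every step is a blocking call
    executed within the same request thread, much like a BPMN/BPEL engine
    stepping through its activities."""
    score = score_service.get_score(customer_id)       # step 1: scoring service
    if score >= 700:                                   # clearly adequate
        return "approved"
    if score >= 650:                                   # just above threshold:
        approval_tasks.append((customer_id, amount))   # hand off to a human task
        return "pending-manual-approval"
    return "rejected"                                  # clearly inadequate
```

The whole decision, including the hand-off to the human task list, happens before the caller gets a response -which is exactly why threads can pile up under load.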

2) Choreographed (Asynchronous) Credit Check Process –no process engine
This alternative would be a fully choreographed, thus asynchronous, business process, purely based on a Microservices Architecture. No process engine is used in this option. Services are completely independent and only communicate with each other using events via an event hub.

The main benefit of this option is the flexibility, extensibility and adaptability it delivers. Because all interactions would be via events, services are decoupled from one another and can therefore be developed, deployed, tested and scaled independently. Furthermore, this architecture, if done well, could scale to handle very large throughput as each service can scale independently.

However, this approach would be complex to implement given the increased number of services and events. Coordinating the sequence of actions and events requires careful modelling. Also, getting end-to-end visibility of the business process wouldn't be straightforward unless additional tooling or custom solutions are adopted. In addition, for the human workflow parts, an additional custom web application would have to be developed -no simple task if the approval workflows are complex (at which point the question arises: why reinvent the wheel when process orchestration engines do this out of the box, and quite well?).

Lastly, it's also worth noting that in order to effectively "call back" to the consumer application, some sort of callback handler -implemented for example using WebSockets or Server-Sent Events- would be required. This isn't necessarily simple and therefore adds to the complexity.
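A minimal sketch of this event-driven choreography follows: an in-memory event hub with hypothetical topic names and a stubbed scoring rule, standing in for a real broker (e.g. Kafka) and real services.

```python
from collections import defaultdict

class EventHub:
    """Tiny in-memory pub/sub hub; services never call each other directly."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

hub = EventHub()
decisions = []

def scoring_service(event):
    # Reacts to credit applications; the score is stubbed for illustration.
    score = 720 if event["customer"] == "alice" else 600
    hub.publish("credit.scored", {**event, "score": score})

def decision_service(event):
    # Reacts to scored applications; knows nothing about the scoring service.
    outcome = "approved" if event["score"] >= 700 else "rejected"
    hub.publish("credit.decided", {**event, "outcome": outcome})

hub.subscribe("credit.applied", scoring_service)
hub.subscribe("credit.scored", decision_service)
hub.subscribe("credit.decided", decisions.append)

hub.publish("credit.applied", {"customer": "alice"})
```

Each service can be deployed and scaled on its own, but notice that no single component knows the overall process state -precisely the end-to-end visibility gap that requires extra correlation tooling.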

3) Choreographed (Asynchronous) Credit Check Process –with process engine
This option can be seen as the best of both worlds. Event Sourcing is still adopted as a pattern; however, instead of having only services react to events in order to accomplish all desired process steps, a business process implemented in a process engine is adopted such that it can execute all desired steps by publishing or subscribing to the relevant events. Using modern process engines such as the aforementioned, the process itself could either be compact enough to be deployed to its own runtime -meaning it would effectively be a Microservice in its own right- or, as is the case with Netflix Conductor, multiple (worker) microservices could independently interact (via events) with Conductor's worklist and queue services.

Alternatively, only a portion of the process could be implemented in the process engine, e.g. the human workflow part, given that this feature alone would save considerable effort if the tool delivers it out of the box. Key to getting this right is ensuring that the process itself can be mapped to a single bounded context and doesn't span multiple ones, as that would break one of the main principles of domain-driven design, fundamental to Microservices Architectures. As was well stated by Bernd Rücker from Camunda (another great BPM engine that aligns well with Microservices Architectures), a much better way to define business processes is to "cut the end-to-end process into appropriate pieces which fit into the bounded contexts", therefore aligning well with Microservices Architectures.

There are many benefits to this approach. For starters, it would be simpler (though not simple) to implement than the previous option -although not as scalable and flexible. However, the better visibility of business process analytics would perhaps compensate for the drawbacks. Add the fact that bespoking a custom web app for approvals wouldn't be required, and it's certainly an option to consider.

Lastly, as per the previous approach, a callback handler would also be required.
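To illustrate the worklist/queue idea, here is a toy engine loosely inspired by Netflix Conductor's model -a heavily simplified, hypothetical sketch (real Conductor defines workflows in JSON and exposes task queues over HTTP; everything below is my own simplification):

```python
from collections import deque

class MiniProcessEngine:
    """Worker microservices poll task queues and report completions; the
    engine tracks end-to-end process state, providing the visibility that
    pure choreography lacks."""
    def __init__(self, steps):
        self.steps = steps                            # ordered task types
        self.queues = {s: deque() for s in steps}     # one worklist per type
        self.state = {}                               # process_id -> state

    def start(self, process_id, payload):
        self.state[process_id] = {"step": 0, "data": dict(payload),
                                  "status": "RUNNING"}
        self.queues[self.steps[0]].append(process_id)

    def poll(self, task_type):
        # A worker pulls its next task (None if the queue is empty).
        q = self.queues[task_type]
        return q.popleft() if q else None

    def complete(self, process_id, output):
        st = self.state[process_id]
        st["data"].update(output)
        st["step"] += 1
        if st["step"] == len(self.steps):
            st["status"] = "COMPLETED"
        else:                                         # enqueue the next task
            self.queues[self.steps[st["step"]]].append(process_id)
```

A "score" worker and a human "approve" worker would each poll their own queue and complete tasks independently, while the engine's state answers the business question "where is this application right now?" at any time.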

Comparing the 3 approaches
The following comparison makes it easier to visualise the pros/cons of each option:

Complexity:
  • Orchestrated (synchronous): (++) Less complex. Fairly straightforward to implement. A known pattern.
  • Choreographed –no process engine: (--) Complex to implement (many moving pieces) with additional technologies and considerations. Increased number of services and events to handle, plus a custom web application for human workflow.
  • Choreographed –with process engine: (+-) Simpler, though not simple, especially when compared to a purely choreographed process.
Scalability (ability to scale and handle high throughput):
  • Orchestrated (synchronous): (--) The process can become the bottleneck if many parallel threads need to be handled.
  • Choreographed –no process engine: (++) Very scalable and can handle large throughput, as each service can scale fully independently. Fully decoupled architecture.
  • Choreographed –with process engine: (+-) Even though services can scale independently, the process "could" become a bottleneck (depending heavily on which process engine is used). However, because the process is asynchronous, it could handle more parallel threads than a synchronous option.
Visibility of the end-to-end process:
  • Orchestrated (synchronous): (++) The process can be monitored end to end, which can be very useful from a business standpoint.
  • Choreographed –no process engine: (+-) Visibility of the end-to-end process is not as straightforward, but still possible if the right tooling is used.
  • Choreographed –with process engine: (+) The process can be monitored almost end to end, which can be very useful from a business standpoint.
Flexibility (ability to independently change/deploy components without affecting others):
  • Orchestrated (synchronous): (+-) Any change to an API consumed by the process will directly impact the process itself. This could be avoided by adding a virtualisation layer in between, but that would result in additional complexity.
  • Choreographed –no process engine: (++) Very flexible. Runtime decoupling via events. Almost all components can be evolved without impacting others (provided events are kept consistent).
  • Choreographed –with process engine: (+) Good flexibility. Runtime decoupling via events. Almost all components can be evolved without impacting others (provided events are kept consistent); however, the majority of the process is still dependent on the central process engine.

There are no silver bullets. No exceptions in this case. However once again, I think Netflix with its Conductor "microservice" orchestrator is changing the ball game on what we thought would be acceptable in a Microservices Architecture.

That said, and answering the question in the title of this article, I don't think the days of BPM / process orchestration are over per se. What I do think, though, is that the way process orchestrations are implemented and processes modelled should (and probably will) change to be more microservices/event oriented, and therefore able to take part in a choreography.

It is, however, equally important that the anatomy/underlying architecture of more traditional process engines also changes (evolves), not just to cope with much higher throughput but also to support a distributed deployment model, e.g. each process, within its bounded context, could be packaged and deployed in its own runtime. In the meantime, I hope that some of the approaches described in this article provide some inspiration on how to combine two paradigms that until recently (at least to me) seemed completely incompatible.

Lastly, I would like to thank Lucas Jellema, Lonneke Dickmans and especially Guido Schmutz and Sven Bernhardt for their valuable contributions to this article.

Sunday, 12 November 2017

2nd vs 3rd Generation API Platforms - A Comprehensive Comparison

Earlier in the year, June to be exact, I published the OTN article 3rd-Generation API Management: From Proxies to Micro-Gateways -based on a concept I presented in an Oracle OpenWorld 2016 Presentation titled API management in the year 2026.

In summary, the article talks about how requirements for accessing information have changed in the digital era: modern applications demand information in real time and expect it to be available 24x7, and cloud adoption has the side effect of causing information to become more and more federated amongst multiple cloud applications (from different vendors) and on-premises systems -as the latter won't go away that easily.

The article continues by explaining that although REST APIs play a critical role in providing such access, the underlying technology stack that has traditionally enabled SOA architectures, typically based on monolithic middleware, will struggle to satisfy the aforementioned needs. The article concludes by explaining that a new form of platform is ultimately required: one that is lightweight, portable, hybrid and cloud-agnostic, REST/JSON native, and Microservices Architecture ready -a 3rd generation.

You might recognise the following picture from some tweets and articles:

2nd vs 3rd Generation API Platforms Conceptually
Although the article has been very well received, one question I get asked frequently is what exactly the difference is between 2nd and 3rd Generation API Platforms. Furthermore, I've noticed that the question is often asked for one of the following reasons:
  1. An investment has recently been made in an API Management stack and there is a desire to understand whether it is 2nd or 3rd generation, or at least to verify whether the desired capabilities are on the product roadmap.
  2. The need for a more sophisticated API platform, one that can go beyond traditional use cases (i.e. API Gateway in DMZ) but also easily satisfy hybrid (cloud/on-prem) scenarios, has been recognised.
  3. A Microservices Architecture initiative has been kick-started and the need for an API Gateway has been identified; however, individuals within the initiative favour modern, lightweight, scalable and affordable solutions over traditional, monolithic, heavy and expensive API gateways.
That said, and in order to address this question, I am sharing a comparison based on the second chapter of my upcoming book Enterprise API Management -where I talk about my learnings, approach and experiences implementing API strategies and platforms for large organisations.

Characteristic 2nd Generation API Platforms 3rd Generation API Platforms
What is it? Based on a monolithic stack. Most-likely derived from an ESB or an adaptation/extension of XML-based appliances. Support for Web APIs (REST/JSON) and associated technologies comes as an ad-on as capability wasn't originally in the product.  Built from the ground up to support Web APIs and modern API requirements such as API-first, hybrid deployment models, elastic scaling and Microservices Architecture. They tend to be lightweight and portable. Many based on async I/O based  engines which makes them extremely efficient and scalable.
Business value of APIs 2nd Gen API Platforms pre-date the rise of the API economy therefore capabilities available in this space (i.e. monetisation) will be, as earlier said, an adaptation/extension of the tooling, not out of the box. Built from the ground up to help organisations realise the business benefits that APIs have to offer. 3rd Gen API Platforms are rich in capabilities to help the business (not just IT) realise benefits. For example, better discoverability of APIs via API developer portals, API specific web-pages, developer self-service and API monetisation.
Deployment model: Centralised vs Federated Typically centralised -most likely in on-prem DMZ infrastructure. Deployments with multiple instances across many different data-centres and geographies are not the norm. Design with federation in mind. Deployment of gateways in multiple data centres and geographies fairly straight-forward, not a massive and complicated engineering undertaking.
ESB-less / XML appliance-less As earlier said, 2nd generation tend to be extensions/adaptations of monolithic stacks such as ESBs or XML appliances. For this reason, satisfying modern requirements such as independent runtimes, rapid scaling and elasticity, rapid installation, multi-cloud and on-premise (hybrid) deployments, will be difficult to satisfy. API platforms built from the ground up to support modern API requirements. Not an adaptation of old technology. Supporting requirements such as rapid scaling, use of Docker containers, infrastructure as code (meaning automating the entire setup/config/maintenance process) and hybrid deployments are one of the key differentiators.
Fat vs light middleware Comparable to a Swiss army knife, 2nd generation platforms can do a lot. From message routing and data transformation, to complex service orchestrations and business rules. Thus in order to deliver such diverse capabilities, the footprint of the platform tends to be large. Such rich capability also means that the Gateway itself is used for all sort of things, including implementation of business logic implementation -considered a bad practice in modern architectures (smart endpoints and dump pipes). Laser focus on satisfying API mediation requirements such as HTTP routing, Authentication/Authorization, API throttling, rate limiting, API load balancing and a few others, but certainly not to satisfy complex data transformation, orchestration or business logic requirements. For this reason amongst others, 3rd Gen Gateways are lightweight (definitely lighter weight than 2nd Gen).
Native Hybrid (any cloud + on premises) On-premise technology by birth. Cloud (if supported) resulting from an adaptation or a cloud-wash. Several constraints most likely imposed by vendor if cloud deployment is supported.

Also note that 2nd Gen gateways claim to be docker-container ready, however when digging deeper one may find that use of containers is either not recommended for production, or the size of the container itself is so large that makes them impractical for use.
Can be deployed in any infrastructure, cloud (any vendor) and/or on-premises without much complication. Straight forward deployment. Docker containers support out of the box.

Native end to end /JSON Not native, in fact many 2nd generation Gateways will convert JSON to XML (or something else) internally for processing and then back to JSON. A good sign that an API Platform is a 2nd Generation one, is the rich support available for XML-based standards such as SOAP/WSDL and WS-*.  Understands/handles JSON end to end natively because it was built for this. Full support for standards such as OpenAPI (aka Swagger)  and API Blueprint in place by default.

XML support rare (perhaps just for converting SOAP to JSON).
API-design first As an extension, not native. In many cases simply not available. There tends to be a lack of proper API design and documentation capabilities which means that many of the APIs are not properly documented. Which means developer experience is not the best. Fundamental part of the product. Rich support for API-design first therefore supporting open standards such as OpenAPI and API Blueprints for defining and mocking APIs. Capabilities to create API dedicated web-pages and publish them in self-service developer portals is the norm.
Microservices Architecture
  • 2nd Gen: Precedes Microservices Architectures. These gateways weren't built for them, so there is a natural mismatch in terms of what a gateway should and shouldn't do. For example, in Microservices Architectures one would expect the heavy lifting to be implemented in the service, not in the gateway, so most of the capabilities of 2nd Gen gateways wouldn't (shouldn't) be used.
  • 3rd Gen: Microservices Architectures are one of the core use cases 3rd Gen API gateways address. For example, service discovery and client-based load balancing are core capabilities of such gateways. 3rd Gen API gateways are also very lightweight and won't force any specific service implementation stack -ideal for polyglot programming, which is one of the fundamental principles of Microservices Architectures.

Software release
  • 2nd Gen: Traditionally an on-premise offering, therefore not cloud-first. Software patches/upgrades/new releases typically have to be downloaded and installed -which can be a complicated task- so the software can rapidly become outdated. Major updates/releases are also not too frequent (sometimes once a year).
  • 3rd Gen: Cloud-first. Release cycles are frequent (monthly or quarterly) and the rollout process is straightforward. In most cases it happens automatically, fully executed by the cloud vendor; in other cases new patches/releases are container-based.

Licensing model
  • 2nd Gen: Typically CPU-core based. Very expensive when it comes to large deployments.
  • 3rd Gen: Subscription model based on usage (i.e. throughput based), not CPU based.

Full API lifecycle
  • 2nd Gen: Partly. Requirements such as API-first, control planes (see next point), API-spec validation and registry-based service discovery (in a microservices context) are typically not available in the product and require extensions. The API lifecycle tends to be fully separated from the service lifecycle.
  • 3rd Gen: Such capabilities are either available out of the box, easily extensible to be combined with other solutions, or already considered and visible as part of the product backlog (roadmap).

Control plane API / DevOps ready
  • 2nd Gen: If control plane APIs (also known as management APIs) exist, they were built afterwards, and not all functionality available via the UI consoles is available via APIs that can be used for a control plane.
  • 3rd Gen: Products tend to be API-first, meaning that all functionality is also accessible via the product API -ideal for Infrastructure as Code and DevOps.

Centralised management and analytics
  • 2nd Gen: Each domain and/or cluster would typically have its own central management and analytics console, so in large deployments additional software -one that can plug into all these domains and clusters- would be required.
  • 3rd Gen: Part of the product architecture by design. 3rd Gen platforms are built for federation and scalability.

Developer centric
  • 2nd Gen: Tend to be complicated, both in terms of use and setup/config. Typically product SMEs are required to install and support the platform. Also, developer-centric tools such as developer portals tend to be an add-on to the core offering -installed and configured separately, not an intrinsic part of the platform.
  • 3rd Gen: Built with Developer Experience in mind. The tooling usability is meant to make the life of the developer easier. Does not require product SMEs to set up, use and/or configure the platform -just good developers. Developer-centric tooling such as API documentation and developer portals are fundamental components of the product architecture.
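One of the 3rd Gen capabilities described above -registry-based service discovery with client-based load balancing- can be sketched in a few lines of Python. Everything here is illustrative: the registry contents, service name and addresses are made up, and a real gateway would query an actual service registry (and honour health checks) rather than an in-memory dict.

```python
import itertools

# Illustrative, in-memory stand-in for a service registry. A real 3rd Gen
# gateway would fetch this over the registry's API at runtime.
REGISTRY = {
    "orders-service": [
        "http://10.0.0.11:8080",
        "http://10.0.0.12:8080",
        "http://10.0.0.13:8080",
    ],
}

class RoundRobinBalancer:
    """Client-side load balancer: hand out registered instances in turn."""

    def __init__(self, registry):
        # One independent round-robin cycle per registered service.
        self._cycles = {name: itertools.cycle(urls) for name, urls in registry.items()}

    def resolve(self, service_name):
        # In a real gateway this lookup would also filter out unhealthy instances.
        return next(self._cycles[service_name])

balancer = RoundRobinBalancer(REGISTRY)
targets = [balancer.resolve("orders-service") for _ in range(4)]
print(targets)
```

The point of the sketch is that the routing decision lives in the (light) gateway or client, not in a heavyweight central appliance -which is exactly the 2nd Gen vs 3rd Gen contrast above.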

Sunday, 8 October 2017

My key takeaways from Oracle OpenWorld & JavaOne 2017

I've literally just arrived back from Oracle OpenWorld and JavaOne 2017 and my head still hurts from so many interesting announcements and cool new things I want to get my hands on and learn. This blog post provides a short summary of my impressions and key takeaways from both events.

General overview:

This year OOW and JavaOne were full of changes from previous ones. For starters, most of the sessions took place between Moscone South, Moscone West and the Marriott Marquis, as opposed to all over the place. I understand that this was mainly due to renovations that took place in Moscone, which meant that more rooms were available.

JavaOne this year took place in Moscone West (as opposed to Hilton Union Square). The first thing that really struck me was the vast number of people that seemed to have attended the event (see below tweet from Adam Bien). I'm not sure if more people attended JavaOne than OOW, but my first observation is that sessions in JavaOne were better attended than those in OOW (at least in the areas I am interested in and from what I could see -this is a personal view so don't get offended if you disagree).

My second observation from this year's event was the increased focus on the developer audience. A clear change of direction from previous years (in my view for the good), and it shows that Oracle is committed and trying hard to engage the broader developer communities (not just Oracle's traditional one). In my view Oracle is taking solid and promising first steps towards achieving this goal, and hopefully this article highlights some of them.
Overall, I really liked the vibe of the event, especially in Moscone West. I was also very pleased to see an open source project I am part of being mentioned a few times :)

Lastly, I am happy about how my four sessions went (I've uploaded the decks already so hopefully they will be made available soon), but I'm especially happy about the last one (the only one I did at JavaOne) as it was well attended even though it was the last of the day (right before the concert). Developers care about APIs :)
Following are my key takeaways in the different areas I am interested in:

Key takeaways in the application development space:
  • Project Fn: Recognising that one of the key challenges in the serverless space is the lack of cross-vendor standards, Oracle announced the release of a new, fully open source serverless project named Project Fn. This is a solid attempt (also well received by many -including myself) to fill a gap in the industry and create a non-proprietary, Java-based solution for serverless applications that can run in any cloud and on-premises. I personally find this a super exciting announcement and can't wait to get my hands on it. 
  • Oracle Container Native Application Development Platform: based on Cloud Native Computing Foundation (CNCF) projects, this open source platform delivers a comprehensive, complete and robust solution for building cloud-native microservices anywhere (any cloud and/or on-premises). The solution is composed of the following services: 

My key takeaways in the Integration space: 

  • Oracle Integration Cloud: not to be confused with ICS, this is a new offering that brings together multiple integration products to deliver a comprehensive and complete iPaaS platform. I am actually very pleased about this, as I had previously written about what an iPaaS platform should look like (click here for the article) and this new offering definitely addresses it.
  • I am a big fan of APIs (if you know me then this is not news!) and I was also happy to capture some key new announcements for the recently launched Oracle API Platform Cloud Service
    • API Plans will be available in about 3 months and will allow for different charging models to be applied to APIs.
    • Native OAuth integration. Not just an OAuth policy (as most API gateways support), this is a full-blown OAuth Authorisation Server (AS) capability that will make it a lot easier to implement different OAuth authorisation flows.
    • Notifications and web-hooks -really cool feature.
    • Partnership + integration with the following products: 
      • API Fortress: For full end to end functional testing of APIs (i.e. OAuth login flows, multiple API calls, etc) 
      • APImatic: for creation of client SDK’s for APIs (in multiple languages) from the API Platform developer portal 
      • Both of the above will be sold separately, however at a price proportional to the platform 
    • Ongoing commitment to continue to fully integrate Apiary into the platform was reiterated several times, including licensing-wise (which has been a bit of a hassle until now). I am also positive that although Apiary will be fully integrated with the broader API platform, it will continue to be sold separately so developers using non-Oracle API gateways can continue to leverage Apiary's design-first capabilities.
  • Oracle Integration Platform Cloud (OIPC): A comprehensive data integration solution that brings together into a single managed platform in the cloud: Oracle Data Integrator (ODI), Oracle GoldenGate and Oracle Enterprise Data Quality (the latter, as I understood, only for the OIPC Governance Edition) 
  • Oracle Application Container Cloud now supports Java EE and Go (Golang). Not a huge announcement but interesting indeed
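On the native OAuth point above: the authorisation flow most relevant to machine-to-machine API calls is the standard OAuth 2.0 client-credentials grant (RFC 6749). The sketch below builds the pieces of such a token request; the token URL, client id and scope are hypothetical placeholders, not real endpoints of the Oracle API Platform.

```python
import base64

def client_credentials_request(token_url, client_id, client_secret, scope=None):
    """Build the parts of an OAuth 2.0 client-credentials token request.

    Returns (url, headers, form_body), ready to be POSTed with any HTTP
    client. Client authentication uses HTTP Basic, as per RFC 6749 sec 2.3.1.
    """
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"grant_type": "client_credentials"}
    if scope:
        body["scope"] = scope
    return token_url, headers, body

# Hypothetical authorisation-server endpoint and credentials for illustration.
url, headers, body = client_credentials_request(
    "https://idcs.example.com/oauth2/v1/token",
    "my-client-id",
    "my-client-secret",
    scope="orders.read",
)
print(body["grant_type"])
```

An API gateway with a built-in authorisation server means consumers run this exchange against the platform itself, instead of the gateway merely validating tokens minted elsewhere.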

Headline keynotes takeaways:

  • Larry's main announcement this year was the Oracle Autonomous Database: from 18c onwards, Oracle introduces the world's first autonomous database, which claims to perform all routine database maintenance tasks -fully automated patching, upgrades, backups and availability architecture- without the need for human intervention 
  • Universal credits: put simply, monthly or annual credits that can be used across all Oracle PaaS/IaaS services (including Cloud@Customer). This is an interesting announcement as it'll make the process of purchasing cloud services a lot easier and will also ensure that credits are actually used.
  • Bring Your Own License (BYOL): ability to re-purpose existing on-premise licenses of Oracle database in Oracle PaaS.
  • Lastly, I'm also sharing some useful tweets that summarise some of my favourite keynotes:
    • Oracle JavaOne keynote:
    • Oracle PaaS key announcements by Amit Zavery:

Wednesday, 10 May 2017

Oracle API Platform Cloud Service Overview

Oracle has recently announced the release of Oracle API Platform Cloud Service.

Here the official press release.

This new platform -not to be confused with Oracle's previous solution- has been built almost entirely from the ground up to satisfy modern API management requirements.

I have been lucky enough to be part of the beta programme and have actually been implementing the product for the last 4 months or so (but trying it for almost a year now). In this blog post I share some of the insight and experiences I've gained in the process.

What is the Oracle API Platform Cloud Service?
It is a 3rd generation API platform that delivers a 'true hybrid' model, allowing APIs to be created, deployed and managed centrally (from the Oracle Cloud) whilst API gateways (the engines that run the APIs) can be deployed in any cloud (i.e. Amazon, Azure, Oracle Cloud, IBM SoftLayer/Bluemix, etc.) and/or on-premises.

In addition, with the incorporation of Apiary into the portfolio, the platform also includes a solid, world-class API-first solution, so developers get the tools and means to properly design APIs using either Swagger or API Blueprint (Apiary's own API design notation), whilst interacting with the API consumers and therefore ensuring that, before any code is built, the API contract is fit for purpose.

API Platform Architecture
The platform consists of 7 key components as the diagram illustrates:

  • Management service: The management service is the cloud-based system that underpins the management console, developer portal and platform API. It's the engine of the entire platform. The brains.
  • Management Console:  As the name suggests this is where APIs, Gateways and User/Roles are managed. It's a role-based application so what a user can do pretty much depends on the role the user belongs to.
  • Developer Portal: A web-based application where developers can search and subscribe to APIs. This is where all of the API documentation can be found and also where application keys are provided after a subscription to an API takes place.
  • Platform API: The entire platform was built following an API-first model. In fact, it can be argued that the management service is in fact an API, as everything that can be done (and more) via the management console and developer portal can be done by directly invoking the Platform API. The Platform API is also consumed by the gateways when phoning home to retrieve new APIs and policies, and to send analytics information.
  • Apiary: As previously mentioned, Apiary is a platform for designing APIs that encourages API designers to maintain an active dialogue with API consumers. Both the management and developer portals are already integrated with Apiary so when a user finds an API in the portal, the API specification (i.e. API blueprint) can also be accessed from one single place.
  • API Gateways: These are the engines that run the APIs and can be deployed anywhere -in any vendor's cloud and/or on-premises. Gateways communicate with the management service by making API calls (a feature known as "phone home"). In this model it is the gateway's responsibility to establish the communication with the "mother ship" (the management service), not the other way around. Because of this, the management of gateways becomes a lot easier, as there is no need to open inbound firewall ports: all communications are outbound.
  • Identity Cloud Service: Most organisations already have their own LDAP directory (i.e. MS Active Directory) where users and roles are managed. The Identity Cloud Service is used to allow the API platform to use an organisation's existing directory as the source for users and roles.
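The "phone home" interaction above boils down to a simple pull loop: the gateway polls the management service, applies whatever deployments are pending, and reports back. The sketch below mimics one iteration of that loop; the stub management API, the endpoint path and the policy names are all invented for illustration, not the product's real API.

```python
# Minimal sketch of the "phone home" pattern: all traffic is outbound from
# the gateway to the management service, never the other way around.

def fetch_pending_deployments(gateway_id, management_api):
    """Outbound call to the management service (stubbed below)."""
    return management_api.get(f"/gateways/{gateway_id}/deployments")

class StubManagementAPI:
    """Stand-in for the cloud-side management service."""

    def __init__(self, pending):
        self._pending = pending

    def get(self, path):
        return self._pending

class Gateway:
    def __init__(self, gateway_id, management_api):
        self.gateway_id = gateway_id
        self.management_api = management_api
        self.deployed = {}  # api name -> list of policies applied

    def poll_once(self):
        # One iteration of the phone-home loop: pull instructions and apply them.
        for api in fetch_pending_deployments(self.gateway_id, self.management_api):
            self.deployed[api["name"]] = api["policies"]

mgmt = StubManagementAPI([
    {"name": "orders-v1", "policies": ["key-validation", "rate-limit"]},
])
gw = Gateway("gw-eu-1", mgmt)
gw.poll_once()
print(sorted(gw.deployed))
```

Because the gateway initiates every call, adding another gateway behind a customer firewall needs no inbound port changes -which is exactly why this model scales operationally.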
API Platform Roles
The platform by default supports six roles (five user roles plus one service account).

  • Administrator: Super user of the platform. Has all rights to deal with user settings and also create/manage APIs and configure gateways.
  • Gateway manager: Role responsible for gateway operations, including deploying, registering and managing gateways.
  • API manager: The API implementer's role, as it gives users full lifecycle rights, including designing, creating and deploying APIs, as well as managing API grants.
  • API designer: Individuals who take on a full- or part-time responsibility (i.e. an architect or developer) to define APIs (either in Swagger or API Blueprint) using Apiary. 
  • Application developer: In other words, these are the API consumers. Users with this role can log into the portal and search/subscribe to APIs.
  • Gateway runtime: Not really a user role; it's a service account used by the gateways to communicate with the management service via the Platform API. Users assigned this role can't sign into the Management Portal or the Developer Portal.
Users can be created and assigned to any of these roles (excluding Gateway runtime, which is a service account). Platform restrictions apply depending on which role a user belongs to.

Tutorials and Presentations
As mentioned earlier, I've had the opportunity to use the Oracle API Platform for a while now. Below are two insightful presentations based on my experience implementing the platform:

API Management and Microservices, a Match Made in Heaven
Oracle Code: London, April 2017

Oracle API Platform Cloud Service Best Practices & Lessons Learnt
PaaS Community Forum: Split, March 2017

Other related presentations:

UK Oracle User Group 2016 (Birmingham): Enterprise API Management

Oracle Open World 2016 (San Francisco): Microservices and SOA

Oracle Open World 2016 (San Francisco): Implementing Enterprise API Management in the Oracle Cloud

Oracle Open World 2016 (San Francisco): API Management in the year 2026

AMIS Beyond Horizon (Utrecht) Microservice Approach for Legacy Modernisation

Since I got my hands on this product, I have been really impressed with the elegant, simple yet powerful architecture of the Oracle API Platform. It's a considerable step forward from its predecessor solution, but most importantly it was built with modern requirements in mind -meaning that the product doesn't really have any major baggage.

The platform, in addition to being lightweight, does not enforce an API implementation path, so customers will not be locked into an end-to-end vendor stack. For example, when using the Oracle API Platform, API applications can be implemented using any technology of choice. This is ideal in microservice architectures, where the majority of developers prefer a polyglot programming style. Other vendors will force you into a specific implementation path to implement, test and deploy your API applications -which results in vendor lock-in.

Because the gateways are also fairly lightweight (and I've already heard that future releases will be even more so), they really are microgateways and cannot be compared to the traditional appliance-centric, heavyweight, second generation API platforms.

One more feature that really makes the product unique is the "phone home" capability of the gateways: gateways call the management service to get all instructions regarding the APIs to deploy and the policies to apply. This means more and more gateways can be added without the typical operations burden of opening firewall ports and troubleshooting failed deployments by looking at logs...

Lastly, the acquisition and incorporation of Apiary into the solution truly is the icing on the cake, as the solution not only has a simple yet robust runtime environment, but also a best-in-class API-first design capability.

Well done Vikas, Jakub, Darko, Robert and the rest of the Oracle product development and engineering team for finally releasing a world-class, future-ready API platform.

If you need help or want to know more about Capgemini's Oracle API Platform offering, please refer to this link.

Friday, 3 March 2017

iPaaS. What is it exactly? Is it on-premise software running on IaaS?

As cloud adoption continues to rise, the so-called 'second wave' of cloud computing becomes less of a prediction and more of a reality we have to deal with. In the past two years or so, for example, in almost every customer engagement I've had, 'the cloud' has been at the very least a topic of discussion. In most cases it has actually been within the scope of our activities.

This is not surprising of course, as the term 'cloud' itself can mean ten different things to ten different people. The sad part though is that it has been years since the first wave of cloud (started by Amazon) and there's still a fair degree of confusion on the topic.

In fact, I still often refer to the NIST definition of cloud computing to explain what cloud computing and PaaS actually are, and how traditional on-premises middleware installed on IaaS isn't PaaS or iPaaS. This is in fact one of the main motivators for this post.


The term Integration Platform as a Service, or just iPaaS, is generally used when referring to integration capabilities delivered entirely on PaaS cloud infrastructure.

In terms of integration capabilities, iPaaS can deliver the same capabilities as (and in many cases more than) those available in traditional on-premise middleware. Such capabilities should be sufficient to satisfy the main types of integration requirements:

Types of Integrations
It is perhaps because of such similarities that, in my experience, there is still a fair amount of misunderstanding about what iPaaS actually is, what it brings to the table and how it differs from traditional on-premise integration middleware.

Some for example wrongly believe that installing a traditional integration product (i.e. IBM BPM or Mule ESB) on IaaS infrastructure will make it iPaaS.

Well, this is far from the truth. Let me elaborate on why:

iPaaS characteristics

The following diagram summarises the NIST definition of PaaS, its characteristics and how iPaaS relates to it:

iPaaS Characteristics
As the diagram suggests, for an integration capability (aka integration platform) to be truly iPaaS, it must comply with NIST's 'essential characteristics of cloud computing'. Following is my own interpretation of each:
  1. On-demand and self-service: it should be possible, at any given point in time, to go to a cloud vendor's website, browse through the different iPaaS offerings, select the one wanted/needed, purchase it online (using a credit card of course), and in minutes get an email with all the details of the instance (already installed and running), how to access it and even use it. No need to talk with a sales representative, negotiate licence costs, provision infrastructure, install/configure the software, and so on.
  2. Broad network access: This is perhaps one of the most important characteristics. Network connectivity between the iPaaS and other applications has to be fast, reliable and secure. Open internet connections can be unpredictable, therefore the cloud vendor must provide alternative means to deliver dedicated high-speed connectivity. For example, Oracle Cloud provides a service called FastConnect, and Amazon a similar one called Direct Connect.
  3. Resource pooling: Compute resources (i.e. CPU, RAM, disk space) should be allocated on-demand (without human intervention) based on resource utilisation, or via configuration if desired. It should be possible to increase or reduce resources either on demand and/or based on pre-configured rules.
  4. Rapid elasticity: During periods of high demand (i.e. Black Friday), predicting resource usage can be almost impossible. A true iPaaS platform should be able to autoscale based on configurable rules and resource demand. The idea is that transaction peaks can be handled by scaling horizontally or vertically -without human intervention. When transaction throughput stabilises, platform resources should automatically shrink back to their original size. It should not be the case that it's possible to rapidly add more capacity, but not the other way around... 
  5. Measured service: a pay-per-use / subscription-based charging model with complete transparency and visibility over usage and billing. For example, if a given iPaaS is charged based on the number of connections to other applications, it should be possible to know exactly how many connections are being used, at any given point in time. If the number of connections is reaching the limit, a notification should be sent so it's possible to allocate more connections. If, on the other hand, the number of connections is on average less than subscribed for, a notification should be sent suggesting a reduction in the number of subscribed connections.
  6. Automatic application patching and upgrades: Although not explicitly mentioned by NIST, this is another key characteristic, and possibly the one that most distinguishes iPaaS from traditional on-premises middleware. Cloud vendors should be fully responsible for periodically applying patches and/or software upgrades to the purchased cloud infrastructure. It is the vendor's responsibility to ensure that the patches and upgrades applied are backward compatible and won't break any application.
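The rapid-elasticity characteristic above is, at its core, just a scaling rule evaluated against resource demand. Here is a toy sketch of such a rule; the thresholds, instance limits and utilisation figures are invented for illustration (real platforms let you configure these and react to many more metrics than CPU).

```python
def scaling_decision(cpu_utilisation, instances,
                     low=0.30, high=0.75, min_instances=2, max_instances=10):
    """Toy autoscaling rule: scale out above `high`, scale back in below `low`.

    Thresholds and bounds are illustrative configuration values.
    """
    if cpu_utilisation > high and instances < max_instances:
        return instances + 1  # peak demand: add capacity
    if cpu_utilisation < low and instances > min_instances:
        return instances - 1  # demand has stabilised: shrink back
    return instances          # within the comfort band: do nothing

print(scaling_decision(0.90, 4))  # Black-Friday-style peak
print(scaling_decision(0.10, 4))  # quiet period
```

Note the rule is symmetric: elasticity that only ever adds capacity (and never releases it) fails the NIST characteristic, as the list item above points out.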
Considering the aforementioned characteristics, it should hopefully be clear that iPaaS can't be simply delivered by installing traditional on-premise software on cloud infrastructure. 

Be watchful though: some vendors might try to sell you 'the wooden bicycle' by simply rebranding their on-premises software as iPaaS, when all they're doing is provisioning exactly the same on-premises software on cloud infrastructure (i.e. Amazon EC2). This is known as cloud washing.

iPaaS & on-premise integration platforms co-existence

Even though iPaaS can independently satisfy cloud-to-cloud integration needs, it doesn't actually mean that on-premises middleware is no longer required. Quite the contrary. Most organisations have already made considerable investments in on-premises integration capabilities. In such organisations, even the thought of replacing these capabilities will raise a few eyebrows. 

Instead, on-premise capabilities can continue to be leveraged not only to satisfy on-premises only integration use cases, but also to co-exist with an iPaaS solution to satisfy cloud to/from on-premises integration requirements.

The following diagram illustrates how typical integration patterns can be satisfied with a combination of iPaaS and on-premises integration capabilities.

Hybrid Integration Patterns
Another thing to bear in mind, though: given that most software vendors are moving towards a cloud-first delivery model, new products, features, capabilities and even bug fixes will be made available first to cloud applications and only then (if at all) to on-premise software.

Therefore, in scenarios where for example a new integration capability (i.e. API management) is required that is not already available on-premises, it makes complete sense to consider the adoption of an iPaaS solution instead. Especially those that deliver flexible deployment models, whereby runtime engines can be deployed on any infrastructure (cloud and/or on-premises) whilst the management console remains in the cloud.


The adoption of cloud (in all of its flavours: SaaS, PaaS or IaaS), combined with the need for organisations to become more digital and user centric, means that data is not only becoming more and more federated, but that accessing it in real time and from virtually anywhere is an absolute must. 

However, there is no need to 'boil the ocean'. Organisations should continue to leverage their existing integration investments, but in parallel define new integration strategies that identify what new capabilities are or will be needed to satisfy emerging integration requirements resulting from the adoption of cloud computing and digital transformation.

Where possible, iPaaS capabilities should be considered, not only because they provide more cost-effective licensing models, but also because, capability-wise, they are better equipped to handle modern and emerging integration requirements. iPaaS also requires less effort to install, configure and run, as most of the work is done by the cloud vendor, including ongoing patching and upgrades.

Lastly, avoid cloud washing by ensuring that the selected iPaaS platform satisfies all NIST's 'essential characteristics of cloud computing'. Read my previous blog for a nice comparison of different iPaaS vendors.