Monday 29 August 2022

What is API Product Management?

Recently I've noticed some good discussions in the API community around what API Product Management actually is and what it entails. Of the content I've come across, I'd highlight What is API Product Management by Bruno Pedro and Erik Wilde, and What API Product Managers Need by Deepa Goyal and Kin Lane.

Given that at Oracle Hospitality I am responsible for leading an entire API and Integration Product Management organisation which oversees more than 3k API capabilities and more than 2k publicly available Postman collections, this topic is naturally of huge interest to me, and perhaps needless to say I have my own opinionated views on it, which is why I decided to write this blog.

As Erik said in his opening statement, "A lot of people are talking about API Product Management but I still think a lot of people are not actually practicing API product management". I think this is true in many different ways, one of the main reasons being that not everyone is clear about what "intangible products" actually are or mean, and, as importantly, about the implications, especially in terms of investment and organisational attention (e.g. priorities).

So before I continue with my point of view, below is an extract from the section API Products Target Operating Model of my Enterprise API Management book, defining what a product is:

"A product is any good and/or service that satisfies a need (a lack of a basic requirement) or a want (a specific requirement for products or services to match a need) and thus can be offered to a market. From a customer standpoint (someone who acquires the good or service, the buyer), a product is something that delivers a benefit as otherwise there is no point in acquiring it.

Products can be tangible or intangible. Tangible products are physical goods (objects) such as phones, cars, drones, or anything that can be seen and touched. Such products are easily identifiable and don’t really require further explanation. 

Intangible products, on the other hand, are trickier to appreciate as they can't be seen and/or touched, not being physical objects. They can be virtual goods such as mobile apps purchased through an app store, software as a service purchased from a cloud vendor, or services offered by an organization and/or individual, such as catering services, hospitality services, and even software development services."

If we apply this definition to APIs, then API products can be defined as intangible goods (because APIs are digital goods, not physical objects) that satisfy the needs of application developers and/or product owners who seek quicker access to information, functionality, and/or innovation deemed necessary in order to deliver a basic, expected, or augmented product.

I go into a lot more detail in the chapter about what treating APIs as products actually entails across different dimensions (team topologies, roles and responsibilities, communication models, communities, etc.), but back to the original topic of this blog: what actually is API Product Management? My definition is in fact quite simple to begin with:

API Product Management is the discipline responsible for the planning, development, launch (commercial/pricing implications, sales enablement, runtime support, etc.), and management of an API product through its full lifecycle, including the communities around it.

Although simple, it aligns well with the definition of "Product Management" in general, which is why with my teams I go a level deeper: if APIs are products, then APIs require designated API Product Managers.

API Product Managers are responsible for good API products that address real market needs (outside-in thinking) and for which there is a known target audience willing to pay (directly or indirectly) for the value they get when using the API.

API Product Managers are responsible for:
  • Understanding the API target audience, its needs and the competitive landscape 
  • Defining API functional and non-functional features roadmap
  • Planning and prioritising API features based on insight
  • Ensuring an API meets all requirements and standards
  • Authoring the required user documentation, marketing collateral, usage samples and recipes
  • Overseeing the full API lifecycle and ensuring it runs smoothly with no blockers
  • Converting user and usage feedback into creative features
  • Promoting, deprecating, retiring API capabilities based on user needs and usage
  • Evangelising and supporting the API user community.
The above definition, however, only works if there is a clear understanding of what constitutes a Good API.

We define a Good API as one that ultimately offers capabilities that are (ideally) unique and relevant to a known audience, but most importantly one that gets used because it has a low entry barrier, solves real problems, and thus delivers value. Furthermore, Good APIs require careful outside-in thinking so they address real market needs for their audience when it matters, and differentiate, thus creating demand for their use and a willingness to pay (directly or indirectly) for the value they deliver. Good APIs are products in their own right.

Good APIs are by design:
  • Intuitive – easy to use with just a few clicks and steps
  • Affordable – low entry barrier, free sandbox, low dev costs
  • Well documented – API spec, good user guides, usage examples/recipes, commercials, limits
  • Discoverable – easy to find capabilities and usage samples
  • Consistent – common semantics and usage experience
  • Performant – fast response times
  • Available – zero downtime
  • Stable – clear versioning strategy, doesn’t break
  • Observable – rich analytics on performance and availability
  • Secured – compliant with the OWASP API Security Top 10
  • Well supported – responsive and timely incident resolution
  • Evolvable – evolves based on user and usage feedback.

Last words:

Before talking about API Product Management, it's crucial to have a clear understanding not just of what an API product actually is in theory, but also of its implications, especially from an organisational and financial standpoint. The investments required to truly launch successful API products (whether internal or external) are significant and there are no silver bullets or shortcuts. There also isn't one way to do it right. In this blog I've shared 'one way'.

That being said, IMHO successful API products have one thing in common: first and foremost they solve real problems, and therefore there is real demand for them. Such demand drives investment, meaning the product continuously evolves, and that is in my opinion the clear distinction between real API products and APIs that are defined as products in theory but in practice lack business investment and organisational support.

Monday 7 February 2022

Websocket Streaming APIs vs WebHooks

When it comes to push technology, there are lots of options out there. And I do mean lots. The market is hot, as you can clearly see in the following picture from my Event-driven API Strategies presentation at the 2019 Nordic APIs Platform Summit.

However, what technology and/or approach to adopt really depends, of course, on the use case in question and the targeted business benefits for both API producers and consumers. For example, we at Oracle Hospitality recently announced a GraphQL Subscriptions / WebSocket based Streaming API for business event push. Details of the announcement here.

Overall our Streaming API is being extremely well received and there is a huge amount of excitement in our vendor and customer communities alike about it. This is great to see of course given the amount of time and effort that was put into delivering this strategy.

That being said, there have been a few questions as to why we didn't opt for the more traditional Webhook approach. This article by James Neate offers a great explanation of the reasoning behind our tech strategy (I highly recommend it); however, in this blog post I wanted to expand on the specifics of why we favoured GraphQL subscriptions / WebSockets over Webhooks.

Here are our eight main reasons:

  1. Webhooks require an API server on the receiver side so events can be pushed back. What this means in practice is that the event consumer must expose an endpoint (aka a callback URL) that can be invoked at any point in time by the event producer to push events. This adds runtime infrastructure requiring additional API runtime governance, close monitoring, and security, all of which naturally incurs additional operating costs. With WebSocket, however, there is no need to implement an API server on the consumer side, just an API client, e.g. using the Apollo GraphQL subscriptions library if GraphQL is being used. An important remark: because we're talking about an API client and not an API server, using serverless infrastructure charged by execution time may not be desirable. Instead consider a different deployment model, for example a container running in Kubernetes.
  2. Because callback URLs are in fact public endpoints, networks/firewalls have to be configured to accept external calls. It also means that such endpoints are exposed to external security threats and therefore have to be adequately protected with infrastructure such as WAFs and API gateways. With Gartner predicting that API-based attacks will become the most frequent attack vector for applications, this matters a lot.
  3. In a Webhook approach the API lifecycle becomes more complicated. This is because the event producer has to define an API spec which must be adopted (to the letter) by all event consumers. Any deviation in structure or behaviour by the consumer will almost certainly result in issues. Moreover, this introduces an additional dimension of complexity in API change management: the server must carefully coordinate any change made to the spec so that callbacks don't error, for example because a consumer is on an older version of the spec and therefore doesn't recognise a new field added to the payload or a new HTTP header. In a WebSocket approach, on the flip side, the API consumer binds to the server's API spec. Therefore, so long as the event producer follows good API management practices around versioning, handling change will be simpler on both ends. Change management becomes particularly simple with GraphQL subscriptions, as subscriptions benefit from the same great schema evolvability features available elsewhere in GraphQL (e.g. an event consumer decides when to start consuming a newly added field).
  4. The Webhook server will most likely always push full event payloads, even if the consumer isn't interested in everything. WebSocket streaming APIs implemented with GraphQL subscriptions, on the other hand, work just like any GraphQL query: you only get the data you are interested in, which makes them super efficient. Basically, users have the ability to 'cherry-pick' the event data they're interested in.
  5. Related to the previous point, every event pushed by the event producer is a new synchronous HTTP call, which isn't very efficient. In a WebSocket approach the server reuses a single TCP connection to push events.
  6. Implementing features such as event playback with Webhooks can be very complicated, e.g. how does the Webhook server know that the consumer wants to play back, say, 3 hours' worth of events? It means that additional infrastructure is required by the event producer so the event consumer can instruct the server to play back events. With WebSocket, however, this is much easier to implement, because during the initial handshake the event consumer has the opportunity to pass additional information to the server, such as an offset number. And this is exactly what we did in our Oracle Hospitality Streaming API: consumers just pass the offset and the server plays back events from there. Currently we support playing back up to 7 days' worth of events.
  7. Another important factor to consider is how to deal with back pressure. With Webhooks there isn't a simple mechanism for it, mainly because the server can't easily tell that the client is suffering from back pressure and therefore doesn't slow down the flow of events being pushed. A common strategy with Webhooks tends to be to avoid back pressure altogether by scaling out the Webhook API. But this strategy may not always work as expected (especially during sudden peaks) and can introduce additional challenges, such as maintaining message sequence when processing events in parallel. With WebSockets this is a lot simpler because the consumer can switch off the connection, deal with the backlog of events, and then switch it back on once it's ready to handle more. This, combined with the ability to play back events, gives the consumer far better options for dealing with back pressure.
  8. And lastly, there is the popularity factor. Although a subjective (and for some a controversial) factor, it's clear that GraphQL and WebSocket are both growing rapidly in popularity, which can't be said for REST and/or Webhooks. So if you're implementing a brand new streaming API that will be around for a while, like in our case, it is important to also consider subjective factors like this. For many reasons that I hope you appreciate without me having to write them down :)
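To make the playback and back-pressure points above more concrete, here is a minimal, hypothetical sketch in plain Python (no real WebSocket involved): the event shapes and the `replay_from` helper are illustrative assumptions, not our actual Streaming API. It shows how an offset passed at connect time lets a consumer pause, drain, and resume exactly where it left off.

```python
# Hypothetical event log kept by the producer (in our case, up to 7 days)
EVENT_LOG = [
    {"offset": 1, "type": "CHECK_IN"},
    {"offset": 2, "type": "CHECK_OUT"},
    {"offset": 3, "type": "ROOM_MOVE"},
]

def replay_from(offset):
    """Server side: replay every buffered event after the given offset."""
    return [e for e in EVENT_LOG if e["offset"] > offset]

# Consumer side: process events, remembering the last offset handled
last_offset = 0
for event in replay_from(last_offset)[:2]:  # back pressure: stop after 2
    last_offset = event["offset"]

# The consumer disconnects, drains its backlog, then reconnects passing
# last_offset in the handshake, so the server resumes where it left off
resumed = replay_from(last_offset)
print(resumed)  # [{'offset': 3, 'type': 'ROOM_MOVE'}]
```

Note how nothing is lost between the disconnect and the reconnect: the offset is the only state the consumer has to persist.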

Thursday 8 October 2020

2020 State of the API Report - My Own Thoughts

This morning whilst going through my emails, I noticed in my inbox one titled The 2020 State of the API Report Is Here. As an API author, practitioner, and product owner, this immediately caught my attention and I naturally went straight into reading its content.

And well, what I read did not disappoint. The report is the result of interviewing more than 13.5k industry professionals who are in one way or another dealing with APIs.

Kudos to Postman and the API Evangelist for this remarkable contribution (and all of the effort I'm sure went into putting it together).

Here is the link to the report:

Below are my own thoughts / remarks, written whilst literally going through the report section by section. I am sharing them in case anyone finds them interesting. I just want to reiterate that these are my own thoughts, so you may or may not agree, and that's ok!

Key Findings

Broadly agree with all key findings, however there was one conclusion that I found peculiarly interesting:

The four most important factors individuals consider before integration with an API are reliability (71.6%), security (71.0%), performance (70.9%), and documentation (70.3%).

Whereas these findings make a lot of sense overall and are consistent with my own personal experience, I am a bit surprised that two key factors didn't make it to the top: API breadth of functionality and API commercials. There could be multiple reasons why they didn't, but I'd rather talk about why I personally feel these are crucial factors:

API breadth of functionality is basically nothing but the business capabilities offered by an API. At the end of the day, I will not use an API that doesn't offer the business capabilities required to satisfy my functional needs. It may be the best-documented API in the world, but if it doesn't do what I need it to do, then I won't use it. Yes, this relates to documentation of course, but it is different nonetheless.

API commercials are things like the API payment model (e.g. pay per call, freemium, subscription tiers, transaction fee, etc.), Ts & Cs, and SLAs, which although perhaps not of much interest to, say, API designers and developers, will certainly be of interest to parties consuming public APIs and having to pay for them.

For example, if I am building a solution that requires foreign exchange, instead of building my own forex capability from scratch I may adopt a public API that does this for me so I can realise this capability and go to market quicker. However, there are several forex APIs out there, and therefore I will likely pick the one that best satisfies my needs, at the best price and with the best commercial terms (and yes, "best" is subjective and depends on the context, but I hope you get my point).

The Rise of API-First

Results are interesting indeed and also consistent with my own experiences. Let me elaborate why:

There are several use cases for APIs out there; however, one I've recently seen a lot is the auto-generation of REST APIs based on legacy interfaces like SOAP. So without judging whether this is the right thing to do or not (as this really depends on many factors), the fact remains that for people involved in this type of implementation, API-first won't make a lot of sense.

To be clear, I am a huge fan of API-first, and I personally find API Blueprint to be an excellent notation for defining an API, especially when working with functional analysts who are less technical. Regardless of whether or not API Blueprint is a popular notation, I find it very intuitive and human friendly, and even if the exposed API spec might not end up being API Blueprint but OAS 2 or 3 (meaning there is additional overhead to generate or convert to OAS), the productivity gained in the early stages of API design is, IMHO, worth it. I also know many who agree with me on this.
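For readers unfamiliar with the notation, below is a minimal, hypothetical API Blueprint sketch (the resource and field names are invented for illustration). Note how close it stays to plain Markdown, which is what makes it approachable for less technical collaborators:

```
FORMAT: 1A
# Rates API

## Rate [/rates/{pair}]

### Retrieve a Rate [GET]

+ Parameters
    + pair: EURUSD (string) - Currency pair

+ Response 200 (application/json)

        { "pair": "EURUSD", "rate": 1.08 }
```

From a document like this, tooling can render documentation, mock servers, and (with some conversion overhead) an OAS spec.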

Who Works with APIs

Not surprised at all that full-stack developers appear as the #1 most popular role working with APIs. In fact, this is the reason why in the past I've targeted many developer conferences, such as Devoxx, Oracle Code, and Java2Days to name a few, to talk about API design and implementation rather than just targeting API-specific conferences where the audience already knows a lot about APIs (though I like those too!).

Furthermore, I think the results reflect the fact that modern software development (e.g. based on MVVM or related patterns) requires APIs, full stop. And now that I work for a software company developing SaaS products, I can say this with even more confidence.... APIs are crucial for any SaaS product. Yes I know, not news! :)

Having said that, I think there is one key role that is not listed which is also crucial and perhaps worth including in subsequent surveys: the API Product Owner / API Product Manager. As APIs become commercial products in their own right, this will (or should) result in organisations making certain individuals responsible and accountable for the success of the API product. My team and I at Oracle Hospitality are one such example, but we're not alone; I know of many organisations that have formalised such roles as well. I think it would be interesting to see how many organisations have actually formalised this role, as for me personally it means that such organisations are ahead in their API business strategies.

A Day, Week, or Year in the Life

Related to my previous answer, I think this question highly depends on the role of the person interviewed, in the sense that yes, engineers will of course spend the majority of their time writing code (and if that's not the case I would be worried, and they would be bored); however, API product managers/owners will spend most of their time working on API design, API commercials, API marketing, API community support, and API/platform feature analysis, to name just a few.

API Strategies

The results of the first section, Internal vs partner vs external, I feel highly depend on the actual industry. I'm not questioning the results overall, but I think they can vary per industry. In Hospitality, for example, I would say that open APIs are on a huge rise given the nature of the business and the actors involved: for an OTA such as Expedia to offer a hotel room for booking, it first needs to get hold of availability, room inventory, and price data.... which in most cases is supplied via APIs. Perhaps not public APIs, but partner APIs (though a partner API could also be considered public, IMHO, if any partner can openly subscribe and start using it without prior approvals). There are many other examples in the Hospitality industry btw, am just not expanding further.

Another remark I have about this section relates to what I said earlier regarding Key Findings. I personally feel that API breadth of functionality and API commercials are crucial aspects for any API Product Owner dealing with a commercial API product, in addition to the ones already listed that is. For example, looking at the survey results, pricing/uptime/performance are all areas that have to be covered somehow in the API SLAs/Ts & Cs. If I am paying to consume API X, then I expect the API to behave in accordance with the contract, which could imply response times of no more than e.g. 1s, and 99.98% uptime.

Executing on APIs

I couldn't agree more that lack of time is the #1 obstacle to producing what I like to call a good API (though I admit what a 'good API' actually is, is probably a topic for another blog post!). In my experience there are multiple factors contributing to this, but I think a key one is competing internal priorities. At the end of the day, API management remains a relatively new discipline, especially to the business. So when the business compares priorities, it is important to have someone, ideally an API Product Manager, ensuring that API-related priorities are well understood in business terms. If such a role doesn't exist, then of course this is far more challenging to accomplish and API priorities will likely lose against other priorities.

Reading further, I didn't quite understand the results of the Collaborating on APIs section... For example, a shared URL could point to API artifacts in GitHub/GitLab/Bitbucket or to API documentation in a portal. Having said that, it is interesting to see that sharing artifacts published in source repositories, e.g. GitHub, is more popular than sharing API documentation in a portal, although not by far. For me this is an interesting takeaway, as I conclude that it could mean: publish the API in both places and let your audience pick which one to use....

On the API Change Management section, as I've said before on Twitter, I favour a versionless API approach such as the one described in this article by Z. That said, what the survey results reflect in my view is that practitioners rely on both source control and API versioning when dealing with API change management. Using Git makes it a lot easier to track changes and compare new/old specs, for example (or even adopt Git flows to add some governance, e.g. PR steps, to the change control process). So I would say Git is perhaps more useful at design time, whereas API versioning has a greater impact at runtime as it can directly impact consuming applications. But again, in my opinion, both are required.

I can really really relate to the results of the API Documentation survey. And I can summarise my interpretation of the results with the following quote:

"Perfect is the Enemy of the Good" - Voltaire

When this survey section is combined with the top result of the next one, Improving API documentation, which is 'better examples', one can easily conclude what the priority should be when documenting APIs.

And last but not least, the Meeting expectations—or not section, which for me is more of an implementation concern. It is very clear to me that performance-related issues are a top concern, which in the case of public APIs has a direct relation to API commercials, hence my earlier point.

Tooling for APIs and Development

The results of the API tools survey also make sense to me (Postman being the most popular tool for API development). My only comment here would be that even though I work for Oracle and am indeed a big fan of Oracle Apiary (ok, there is a bit of bias here!), I am in fact also a big fan of Postman, and we actually use both in a complementary way. I would say that Postman is a very good complement to many of the tools listed in the section, and I wouldn't be surprised if many organisations are also complementing their API platform capabilities with Postman.

I also find the result of the Platform vs separate tools section quite interesting. In my years as a consultant, this was indeed a very popular recommended approach and it resonated well with customers as it delivers a good balance of cost/benefits.

Skipping some of the other sections and jumping into DevOps tooling: I'm not necessarily surprised that Jenkins is the most popular tool, but I wonder how much of this is just because Jenkins has been around for a while and is the tool most people are familiar with. I am personally interested in seeing how the CI/CD pipeline capabilities now embedded in source repositories like GitHub or GitLab grow in popularity. For example, GitHub Actions is a fairly new capability and is already very popular according to the survey results.... so reading between the lines....

On Deploying APIs, I would say something related to my previous point: given the increased popularity of Infrastructure as Code, I'm not surprised that CI/CD pipelines are the most popular way to deploy APIs. In fact, I would encourage deploying this way if you're not doing so already. I would go further and say that an API platform that does not offer, one way or another, the ability to deploy APIs via a pipeline is, in my opinion, missing a key capability.

API Technologies

The Architectural style section is perhaps the most controversial question in communities of API practitioners (and one that always reminds me of the VHS vs Betamax discussions back in the day). I kinda feel that the survey question was too broad and perhaps not comparing apples with apples.

This is just my opinion of course, but I would for example have split sync API and async API architectural styles. For instance, there is a reason Webhooks are often referred to as RESTHooks. And in GraphQL, many popular implementations of GraphQL Subscriptions and Live Queries actually make use of WebSocket as the transport protocol (as GraphQL doesn't impose a transport).
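For illustration, a GraphQL subscription (carried over WebSocket in implementations like Apollo's) might look like the following hypothetical sketch, with invented field names, where the consumer cherry-picks just the event fields it needs:

```graphql
# Hypothetical subscription; operation and field names are illustrative
subscription {
  newBusinessEvent(hotelId: "HOTEL1") {
    eventName     # only the selected fields are pushed to this consumer,
    timestamp     # even if the underlying event type carries many more
  }
}
```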

Nevertheless, what I conclude is that REST remains the most popular architectural style implemented today for synchronous APIs, and Webhooks for async, with GraphQL and WebSocket rapidly increasing in popularity.

Going further, in the case of sync APIs, I think the results resemble to an extent what one can find in tools like Stack Overflow Trends (which in my view is one of the best tools to understand tech trends based on actual developer adoption). See for example:


As can be seen, yes, REST remains more popular, but its trend is downward as opposed to GraphQL, whose trend is upward. And the gap is closing.

In the case of async APIs, however, Stack Overflow tells a different story. But it is a tricky one, because Webhook is not a transport protocol but rather an architectural approach, whereas WebSocket is a transport protocol. So it's perhaps a bit of an apples/bananas comparison...


I recommend my Event-driven API Strategies: from WebHooks to GraphQL Subscriptions recording from The 2019 Nordic APIs Platform Summit, as I discuss this topic there in far more detail. Below is a diagram from that same presentation that you may find useful.

Regarding the API Specifications survey results, again, I personally wouldn't have mixed data structure specs (e.g. JSON Schema, Avro, Protobuf, etc.) with API definition specifications such as Swagger, OpenAPI 3.0, GraphQL Schema, or API Blueprint. The reason being that I feel it can confuse people who aren't fully familiar with the concerns addressed by each of these specs. Having said that, I feel the results make sense: OAS 2.0 is by far the most popular specification out there, with both OAS 3 and GraphQL catching up fast (and I wish API Blueprint were as well, at least for the early stages of API design).

The Future of APIs

Ok, now I may sound a bit like a broken record; however, I feel that in the Future technologies survey results, again, I wouldn't have mixed API implementation technologies with API architectural-style technologies. I say this because GraphQL, HTTP/2, and event-driven APIs can all be implemented as microservices running in Kubernetes, and all with service meshes like Istio. In fact, this is exactly the way I've done it many times in the past, and I am by no means alone in doing it this way....

But I have to agree: microservices architecture represents, in my opinion, the best architectural approach to implementing APIs. In fact, in my book Enterprise API Management I have many pages and dedicated sections covering the many capabilities and patterns related to implementing APIs based on a microservices architecture.

Having said all that, the survey results of this section are really interesting in that one can also interpret that Microservices with GraphQL are perhaps the most popular combination... I would've thought that HTTP/2 and gRPC were perhaps more popular amongst microservices practitioners, but hey, this is exactly one of the reasons I found this survey really insightful.

APIs and Business Initiatives

One illustration came to mind when reading the results of this section and I'll say no more. I think we're all a bit over the top with COVID related discussions:

Learning about APIs

The results of this last section were in fact another eye-opener for me. And although it shouldn't have come as a surprise that "learning on the job from colleagues" was the most popular way to learn about APIs (after all, a lot of my time goes into sharing knowledge and learning from others), it never occurred to me that it was even more important than learning from documentation (the second most rated entry). So I am pleased that I know this now, and will almost immediately put it into practice by emphasising to my team and peers the importance of sharing knowledge with our colleagues, of course as a complement to proper documentation!!

Thanks again Postman and the API Evangelist for this great report, and I hope you find my comments of value for future surveys :)

Wednesday 27 February 2019

A brief look at the evolution of interface protocols leading to modern APIs

Application interfaces are as old as the origins of distributed computing and can be traced back to the late 1960s, when the first request-response style protocols were conceived. However, according to this research, it wasn't until the late 1980s, with the first popular release of RPC (described below) by Sun Microsystems (later acquired by Oracle), that internet-based interface protocols gained wide popularity and adoption.

This is perhaps why the term Application Programming Interface (API) even today can often result in ambiguity depending on who you ask and in what context, probably because historically the term has been used (and to a degree continues to be used) to describe all sorts of interfaces well beyond just web APIs (e.g. REST).

This article therefore attempts to demystify (to an extent) the origins of modern web-based APIs, by listing and describing in chronological order (as illustrated below) the different interface protocols and standards that, in my view, have had a major influence on modern web-based APIs as we know them today (e.g. SOAP/WSDL-based web services, REST, GraphQL, and gRPC, to name a few).

This article is part of the research done for my upcoming book Enterprise API Management, where I deep-dive into the three trendiest API architectural styles according to the following Google Trends.

Note that some of the texts in the following section are not mine but extracts from the referenced articles. Please do let me know if I missed any reference! Thanks.


Open Network Computing (ONC) Remote Procedure Call (RPC) is a remote procedure call system originally developed by Sun Microsystems in the 1980s as part of their Network File System project, sometimes referred to as Sun RPC.

A remote procedure call is when a computer program causes a procedure (subroutine) to execute in a different address space (commonly on another computer on a shared network), which is coded as if it were a normal (local) procedure call, without the programmer explicitly coding the details for the remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. This is a form of client–server interaction (the caller is the client, the executor is the server), typically implemented via a request–response message-passing system.
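The stub idea described above can be sketched in a few lines of Python. This is a toy, in-process sketch rather than a real network call; the `RemoteStub` class and the `add` procedure are invented for illustration:

```python
# Toy sketch of the RPC stub pattern: the caller invokes what looks like a
# local method, while the stub marshals the call into a request message and
# a dispatcher (standing in for the remote server) executes it.

class Server:
    """Stands in for the remote process; maps procedure names to code."""
    def __init__(self):
        self.procedures = {"add": lambda a, b: a + b}

    def handle(self, request):
        name, args = request["procedure"], request["args"]
        return {"result": self.procedures[name](*args)}

class RemoteStub:
    """Client-side proxy: makes remote calls look like local ones."""
    def __init__(self, server):
        self.server = server  # in reality: a network connection

    def __getattr__(self, name):
        def call(*args):
            request = {"procedure": name, "args": list(args)}  # marshalling
            response = self.server.handle(request)             # "network" hop
            return response["result"]                          # unmarshalling
        return call

stub = RemoteStub(Server())
print(stub.add(2, 3))  # reads exactly like a local call
```

In a real RPC system the stub code is generated from the interface definition, and the "network hop" serialises the request onto the wire.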

In the object-oriented programming paradigm, RPC calls are represented by remote method invocation (RMI). 

The RPC model implies a level of location transparency, namely that calling a procedure is largely the same whether it is local or remote. However, the two are usually not identical, so local calls can be distinguished from remote calls. Remote calls are usually orders of magnitude slower and less reliable than local calls, so distinguishing them is important.

Note that theoretical proposals of remote procedure calls as the model of network operations date to the 1970s, and practical implementations date to the early 1980s. Bruce Jay Nelson is generally credited with coining the term "remote procedure call" in 1981, though the idea of treating network operations as remote procedure calls goes back at least to the 1970s in early ARPANET documents. The first popular implementation of RPC on Unix was Sun's RPC.


Interface definition
External Data Representation (XDR) / RPC language.

Payload
Serialised data based on XDR.
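As a rough illustration of the kind of wire format XDR defines (big-endian, 4-byte aligned fields), here is a hand-rolled sketch of its integer and string encodings using Python's struct module. This is not a real XDR library, just a sketch of the two simplest rules:

```python
import struct

def xdr_int(value):
    # XDR encodes integers as 4-byte big-endian two's complement.
    return struct.pack(">i", value)

def xdr_string(text):
    # XDR strings: 4-byte big-endian length, then the bytes,
    # zero-padded up to a multiple of 4 bytes.
    data = text.encode("ascii")
    padding = (-len(data)) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * padding

print(xdr_int(1).hex())        # 00000001
print(xdr_string("hi").hex())  # 0000000268690000
```

The fixed alignment is what lets an RPC runtime on any platform walk the octet stream without ambiguity.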

Transport protocol
ONC then delivers the XDR payload using either UDP or TCP.

First released
April 1988.


The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) and was designed to facilitate the communication of systems that are deployed on diverse platforms.

CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA uses an object-oriented model, although the systems that use CORBA do not have to be object-oriented. CORBA is an example of the distributed object paradigm.

Interface definition
CORBA uses an interface definition language (IDL) to specify the interfaces that objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. 

Payload
Messages based on the Internet Inter-ORB Protocol (IIOP).

Transport protocol
TCP/IP and later HTTP (apparently since 2007, with HTIOP).

First released
Version 1.0 was released in October 1991.



The Distributed Computing Environment (DCE) RPC was an RPC system commissioned by the Open Software Foundation (OSF), a non-profit organisation originally consisting of Apollo Computer, Groupe Bull, Digital Equipment Corporation, Hewlett-Packard, IBM, Nixdorf Computer, and Siemens AG, sometimes referred to as the "Gang of Seven". In February 1996 the Open Software Foundation merged with X/Open to become The Open Group. The OSF was intended to be a joint development effort, mostly in response to a perceived threat of "merged UNIX system" efforts by AT&T Corporation and Sun Microsystems.

In DCE RPC, the client and server stubs are created by compiling a description of the remote interface (interface definition file) with the DCE Interface Definition Language (IDL) compiler. The client application, the client stub, and one instance of the RPC runtime library all execute in the caller machine; the server application, the server stub, and another instance of the RPC runtime library execute in the called (server) machine.


Interface definition
By making use of interface definition files (IDF) based on the Interface Definition Language (IDL).

The DCE RPC protocol specifies that inputs and outputs be passed in octet streams. The IDL provides the syntax for describing these structured data types and values, while the Network Data Representation (NDR) specification of the protocol provides a mapping of IDL data types onto octet streams. NDR defines primitive data types, constructed data types, and representations for these types in an octet stream.

Transport protocol
DCE/RPC can run atop a number of protocols, including:

- TCP: Typically, connection-oriented DCE/RPC uses TCP as its transport protocol. The well-known TCP port for DCE/RPC EPMAP is 135. This transport is called ncacn_ip_tcp.
- UDP: Typically, connectionless DCE/RPC uses UDP as its transport protocol. The well-known UDP port for DCE/RPC EPMAP is 135. This transport is called ncadg_ip_udp.
- SMB: Connection-oriented DCE/RPC can also use authenticated named pipes on top of SMB as its transport protocol. This transport is called ncacn_np.
- SMB2: Connection-oriented DCE/RPC can also use authenticated named pipes on top of SMB2 as its transport protocol. This transport is also called ncacn_np.

First released
The first release ("P312 DCE: Remote Procedure Call") dates to 1993.



The Distributed Component Object Model (DCOM) is a proprietary Microsoft technology for communication between software components on networked computers. The addition of the "D" to COM was due to extensive use of DCE/RPC (Distributed Computing Environment/Remote Procedure Calls), more specifically Microsoft's enhanced version, known as MSRPC.

DCOM was considered a major competitor to CORBA.

Interface definition
Characteristics of an interface are defined in an interface definition (IDL) file and an optional application configuration file (ACF):

- The IDL file specifies the characteristics of the application's interfaces on the wire, that is, how data is to be transmitted between client and server, or between COM objects.
- The ACF file specifies interface characteristics, such as binding handles, that pertain only to the local operating environment. The ACF file can also specify how to marshal and transmit a complex data structure in a machine-independent form.
- The IDL and ACF files are scripts written in Microsoft Interface Definition Language (MIDL), which is the Microsoft implementation and extension of the OSF-DCE interface definition language (IDL).

Payload
DCOM objects (Microsoft proprietary).

Transport protocol
TCP/IP and later HTTP (since 2003).

First released
OLE 1.0, released in 1990, was an evolution of the original Dynamic Data Exchange (DDE) concept that Microsoft developed for earlier versions of Windows. OLE 1.0 later evolved to become an architecture for software components known as the Component Object Model (COM), which later in early 1996 became DCOM. 



The Extensible Markup Language (XML) Remote Procedure Call (RPC) is a protocol which uses XML to encode its calls and HTTP as a transport mechanism. In XML-RPC, a client performs an RPC by sending an HTTP request to a server that implements XML-RPC and receives the HTTP response. A call can have multiple parameters and one result. The protocol defines a few data types for the parameters and result. Some of these data types are complex, i.e. nested. For example, you can have a parameter that is an array of five integers.
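Python's standard library ships with XML-RPC support, which makes the wire format easy to inspect; the method name `example.add` below is made up for illustration:

```python
import xmlrpc.client

# Marshal a call with two parameters into XML-RPC's XML wire format.
request_xml = xmlrpc.client.dumps((5, "hello"), methodname="example.add")
print(request_xml)

# Unmarshal it back: we recover the parameters and the method name.
params, method = xmlrpc.client.loads(request_xml)
print(method, params)  # example.add (5, 'hello')
```

In a real exchange the XML above travels as the body of an HTTP POST, and the server's response carries the single result back the same way.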

Interface definition
No explicit interface definition language. The protocol defines a set of header and payload requirements with which the implementation (e.g. in Java) must comply.


Transport protocol
HTTP.

First released
The XML-RPC protocol was created in 1998 by Dave Winer of UserLand Software and Microsoft, with Microsoft seeing the protocol as an essential part of scaling up its efforts in business-to-business e-commerce. As new functionality was introduced, the standard evolved into what is now SOAP.



Enterprise Java Beans (EJB) is a server-side software component that encapsulates the business logic of an application. An EJB web container provides a runtime environment for web-related software components, including computer security, Java servlet lifecycle management, transaction processing, and other web services. The EJB specification is a subset of the Java EE specification. EJBs are/were typically used when building highly scalable and robust enterprise-level applications that can be deployed on a Jakarta EE (formerly J2EE) compliant application server such as JBoss, WebLogic, etc.

Interface definition
Java remote interface (extending javax.ejb.EJBObject) declaring the methods that a client can invoke.

Payload
Originally serialised Java objects (e.g. DTOs); later releases also support XML and JSON over HTTP.

Transport protocol
EJB originally specified Java Remote Method Invocation (RMI) as the transport protocol, but later releases also support HTTP.

First release
The EJB specification was originally developed in 1997 by IBM and later adopted by Sun Microsystems (EJB 1.0 and 1.1) in 1999 and enhanced under the Java Community Process as JSR 19 (EJB 2.0), JSR 153 (EJB 2.1), JSR 220 (EJB 3.0), JSR 318 (EJB 3.1) and JSR 345 (EJB 3.2).



Representational State Transfer (REST) is a software architectural style that defines a set of constraints to be used for creating Web services. Such constraints restrict the ways that the server can process and respond to client requests so that, by operating within these constraints, the system gains desirable non-functional properties, such as performance, scalability, simplicity, modifiability, visibility, portability, and reliability. If a system violates any of the required constraints, it cannot be considered RESTful.

These constraints are (from Roy's dissertation):

- Client-server: separation of concerns is the principle behind the client-server constraints. By separating the user interface concerns from the data storage concerns, we improve the portability of the user interface across multiple platforms and improve scalability by simplifying the server components. Perhaps most significant to the Web, however, is that the separation allows the components to evolve independently, thus supporting the Internet-scale requirement of multiple organisational domains.
- Stateless: communication must be stateless in nature, such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.
- Cache: in order to improve network efficiency, we add cache constraints to form the client-cache-stateless-server style. Cache constraints require that the data within a response to a request be implicitly or explicitly labeled as cacheable or non-cacheable. If a response is cacheable, then a client cache is given the right to reuse that response data for later, equivalent requests.
- Uniform interface: the central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components. By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and hypermedia as the engine of application state (HATEOAS).
- Layered system: in order to further improve behaviour for Internet-scale requirements, the layered system style allows an architecture to be composed of hierarchical layers by constraining component behaviour such that each component cannot "see" beyond the immediate layer with which it is interacting. By restricting knowledge of the system to a single layer, we place a bound on the overall system complexity and promote substrate independence. Layers can be used to encapsulate legacy services and to protect new services from legacy clients, simplifying components by moving infrequently used functionality to a shared intermediary. Intermediaries can also be used to improve system scalability by enabling load balancing of services across multiple networks and processors.
- Code-on-demand: REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts. This simplifies clients by reducing the number of features required to be pre-implemented. Allowing features to be downloaded after deployment improves system extensibility. However, it also reduces visibility, and thus is only an optional constraint within REST.
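To make the uniform interface and stateless constraints concrete, here is a toy in-process sketch (no real HTTP; the `rooms` resource and its fields are invented). Every request is self-contained, resources are identified by URIs, and they are manipulated through JSON representations via a small, uniform set of methods:

```python
import json

# Toy resource store: resources identified by URI, manipulated via
# representations (JSON), with a uniform set of methods.
resources = {"/rooms/101": {"number": 101, "occupied": False}}

def handle(method, uri, body=None):
    # Each request carries everything needed to process it (stateless):
    # a method from the uniform interface, a URI, and a representation.
    if method == "GET":
        return 200, json.dumps(resources[uri])
    if method == "PUT":
        resources[uri] = json.loads(body)  # replace the representation
        return 200, json.dumps(resources[uri])
    if method == "DELETE":
        del resources[uri]
        return 204, ""
    return 405, ""

status, representation = handle("GET", "/rooms/101")
print(status, representation)
```

The point of the sketch is that adding a new resource requires no new methods: the interface stays uniform and clients stay decoupled from the implementation behind each URI.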

REST was first introduced in the year 2000 as part of Roy Fielding's PhD dissertation titled "Architectural Styles and the Design of Network-based Software Architectures". Although the first (or at least the first publicly known) REST API was launched by eBay that same year, the adoption of REST over alternatives (such as SOAP/WSDL web services) only really gained traction towards the end of 2004, when Flickr launched its first publicly available REST API, with Facebook and Twitter shortly after also publishing their own public REST APIs.


Interface definition
Many open-source interface definition languages exist for REST APIs, although none were part of Roy Fielding's original REST PhD dissertation.

Some of the most popular ones are:

- Swagger, later renamed the OpenAPI Specification (OAS), the latest version being 3.0 (with Swagger since becoming the name of a set of toolsets for adopting OAS).
- API Blueprint, created by the founders of Apiary (now part of Oracle), a platform that offers robust REST API design and testing capabilities.
- RESTful API Modeling Language (RAML), created by MuleSoft.

Payload
REST does not specify any particular payload format; however, the majority of REST APIs make use of the JavaScript Object Notation (JSON), a lightweight data-interchange format that is easy for humans to read and write. XML payloads are also not uncommon in REST APIs.

Transport protocol:
REST itself does not mandate a transport protocol, but in practice REST APIs are delivered over HTTP.

First release:
REST was defined by Roy Fielding in his 2000 PhD dissertation "Architectural Styles and the Design of Network-based Software Architectures" at UC Irvine. He developed the REST architectural style in parallel with HTTP 1.1 of 1996–1999, based on the existing design of HTTP 1.0 of 1996.

In terms of interface definition languages:

- Swagger was first released in 2011. Its name switched to OAS in 2016.
- In 2017, OAS 3.0 was released.
- API Blueprint and RAML were both released in 2013.


SOAP/WSDL & Web Services

The Simple Object Access Protocol (SOAP) is an XML-based protocol for the exchange of information in a decentralised, distributed environment. SOAP was designed as an object-access protocol in 1998 for Microsoft.

It consists of three parts:

- An envelope that defines a framework for describing what is in a message and how to process it
- A set of encoding rules for expressing instances of application-defined datatypes
- A convention for representing remote procedure calls and responses.
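The envelope structure above can be sketched with Python's standard XML tooling. The `GetRoomType` operation and its `roomTypeCode` element are invented for illustration; only the envelope namespace is SOAP's own:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

# Part 1: the envelope framing the message, with a header and a body.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# Part 3: an RPC-style call encoded inside the body (operation name invented).
call = ET.SubElement(body, "GetRoomType")
ET.SubElement(call, "roomTypeCode").text = "DLX"

print(ET.tostring(envelope, encoding="unicode"))
```

A real web service would derive the operation and element names from its WSDL rather than hand-coding them like this.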

The Web Services Description Language (WSDL), as its name suggests, is an interface description language, also based on XML. The main purpose of WSDL is to describe the functionality offered by a SOAP interface. WSDL 1.0 (September 2000) was developed by IBM, Microsoft, and Ariba to describe Web Services for their SOAP toolkit.

The combination of SOAP and WSDL as the means to define and implement open-standard based interfaces eventually became known as Web Services. The term also became official in 2004 as part of W3C's Web Services Architecture.

It's worth mentioning that Web Services became one of the core building blocks of Service Oriented Architectures (SOA).


Interface definition
The Web Services Description Language (WSDL), described above.

Payload
XML SOAP messages consisting of an envelope that contains a header and a body.

Transport protocol:
SOAP can potentially be used in combination with a variety of other protocols; however, the only bindings defined in the specification describe how to use SOAP in combination with HTTP and the HTTP Extension Framework.

First release:
SOAP version 1.2 became an official W3C recommendation in June 2003.
In 2004 the term Web Services became official as part of W3C's Web Services Architecture recommendation.



The Open Data Protocol (OData) is a REST-based protocol originally designed by Microsoft that later became an OASIS standard and received ISO/IEC approval. The main objective of OData is to standardise the way in which basic data access operations can be made available via REST.

OData is built on top of HTTP and uses URIs to address and access data feed resources (in JSON format). The protocol is based on AtomPub and extends it by adding metadata to describe the data source and a standard means of querying the underlying data.
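For instance, OData's standardised query options ($select, $filter, $top, and so on) turn basic data-access operations into plain URLs; the service root and entity set below are invented for illustration:

```python
# Build an OData query URL by hand (service root and entity set invented).
# Real clients would percent-encode the spaces in the $filter expression.
service_root = "https://example.com/odata"
entity_set = "Products"
options = "$select=Name,Price&$filter=Price lt 20&$top=5"

url = f"{service_root}/{entity_set}?{options}"
print(url)
# https://example.com/odata/Products?$select=Name,Price&$filter=Price lt 20&$top=5
```

Because the query grammar is standardised, any OData-aware client can query any OData service the same way, which is precisely what the criticism below takes issue with.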

OData was subject to criticism in 2013 when Netflix abandoned the use of the protocol.

"A more technical concern with OData is that it encourages poor development and API practices by providing a black-box framework to enforce a generic repository pattern. This can be regarded as an anti-pattern as it provides both a weak contract and leaky abstraction. An API should be designed with a specific intent in mind rather than providing a generic set of methods that are automatically generated. OData tends to give rise to very noisy method outputs with a metadata approach that feels more like a WSDL than REST. This doesn’t exactly foster simplicity and usability".

In spite of this, large software companies like Microsoft and SAP still back the protocol, although industry wide OData seems to have declined in popularity.


Interface definition
OData services are described in terms of an Entity Model. The Common Schema Definition Language (CSDL) defines a representation of the entity model exposed by an OData service using (in version 4) JSON.


Transport protocol:
HTTP.

First release:
In May, 2012, companies including Citrix, IBM, Microsoft, Progress Software, SAP AG, and WSO2 submitted a proposal to OASIS to begin the formal standardization process for OData. Many Microsoft products and services support OData, including Microsoft SharePoint, Microsoft SQL Server Reporting Services, and Microsoft Dynamics CRM. OData V4.0 was officially approved as a new OASIS standard in March, 2014.



The Graph Query Language (GraphQL) is a query language for APIs and a runtime for fulfilling those queries with your existing data. It was created by Facebook in 2012 to get around common constraints of the REST approach when fetching data.

GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

GraphQL isn't tied to any specific database or storage engine and is instead backed by your existing code and data. A GraphQL service is created by defining types and fields on those types, then providing functions for each field on each type.

GraphQL is not a programming language capable of arbitrary computation, but is instead a language used to query application servers that have capabilities defined in this specification. GraphQL does not mandate a particular programming language or storage system for application servers that implement it. Instead, application servers take their capabilities and map them to a uniform language, type system, and philosophy that GraphQL encodes. This provides a unified interface friendly to product development and a powerful platform for tool-building.

GraphQL has a number of design principles:

- Hierarchical: Most product development today involves the creation and manipulation of view hierarchies. To achieve congruence with the structure of these applications, a GraphQL query itself is structured hierarchically. The query is shaped just like the data it returns. It is a natural way for clients to describe data requirements.
- Product‐centric: GraphQL is unapologetically driven by the requirements of views and the front‐end engineers that write them. GraphQL starts with their way of thinking and requirements and builds the language and runtime necessary to enable that.
- Strong‐typing: Every GraphQL server defines an application‐specific type system. Queries are executed within the context of that type system. Given a query, tools can ensure that the query is both syntactically correct and valid within the GraphQL type system before execution, i.e. at development time, and the server can make certain guarantees about the shape and nature of the response.
- Client‐specified queries: Through its type system, a GraphQL server publishes the capabilities that its clients are allowed to consume. It is the client that is responsible for specifying exactly how it will consume those published capabilities. These queries are specified at field‐level granularity. In the majority of client‐server applications written without GraphQL, the server determines the data returned in its various scripted endpoints. A GraphQL query, on the other hand, returns exactly what a client asks for and no more.
- Introspective: GraphQL is introspective. A GraphQL server’s type system must be queryable by the GraphQL language itself, as will be described in this specification. GraphQL introspection serves as a powerful platform for building common tools and client software libraries.
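The hierarchical, client-specified nature of a query can be mimicked with a small field-selection sketch. This is a toy stand-in for a GraphQL executor, not a real GraphQL library, and the user record is invented:

```python
def execute(selection, record):
    # Walk the query shape: a field mapped to None is a leaf,
    # a field mapped to a dict is a nested sub-selection.
    result = {}
    for field, sub in selection.items():
        result[field] = execute(sub, record[field]) if sub else record[field]
    return result

# The "server-side" data (invented) holds more than the client asks for.
user = {"id": 1, "name": "Ada", "email": "ada@example.com",
        "address": {"city": "London", "zip": "E1"}}

# The query is shaped just like the data it returns: only name and address.city.
query = {"name": None, "address": {"city": None}}
print(execute(query, user))  # {'name': 'Ada', 'address': {'city': 'London'}}
```

Note how the response mirrors the query's shape and contains nothing the client did not ask for, which is exactly the hierarchical and client-specified principles above.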

Interface definition
The GraphQL Schema Definition Language (SDL).


Transport protocol:
The GraphQL specification is transport-agnostic; in practice GraphQL APIs are typically served over HTTP, commonly via a single endpoint.

First release:
GraphQL was publicly released in 2015. The GraphQL Schema Definition Language (SDL) was added to the specification in February 2018.



gRPC is defined by Google as a modern Remote Procedure Call (RPC) framework that can run in any environment.  gRPC in principle enables client and server applications to communicate transparently, and makes it easier to build connected systems.

gRPC was designed based on the following principles:

- Services not objects, messages not references: promote the microservices design philosophy of coarse-grained message exchange between systems while avoiding the pitfalls of distributed objects and the fallacies of ignoring the network.
- Coverage & simplicity: the stack should be available on every popular development platform and easy for someone to build for their platform of choice. It should be viable on CPU & memory limited devices.
- Free & open: make the fundamental features free for all to use. Release all artefacts as open-source efforts with licensing that should facilitate and not impede adoption.
- Interoperability & reach: the wire-protocol must be capable of surviving traversal over common internet infrastructure.
- General purpose & performant: the stack should be applicable to a broad class of use-cases while sacrificing little in performance when compared to a use-case specific stack.
- Layered: key facets of the stack must be able to evolve independently. A revision to the wire-format should not disrupt application layer bindings.
- Payload agnostic: different services need to use different message types and encodings such as protocol buffers, JSON, XML, and Thrift; the protocol and implementations must allow for this. Similarly the need for payload compression varies by use-case and payload type: the protocol should allow for pluggable compression mechanisms.
- Streaming: storage systems rely on streaming and flow-control to express large data-sets. Other services, like voice-to-text or stock-tickers, rely on streaming to represent temporally related message sequences.
- Blocking & non-blocking: support both asynchronous and synchronous processing of the sequence of messages exchanged by a client and server. This is critical for scaling and handling streams on certain platforms.
- Cancellation & timeout: operations can be expensive and long-lived - cancellation allows servers to reclaim resources when clients are well-behaved. When a causal-chain of work is tracked, cancellation can cascade. A client may indicate a timeout for a call, which allows services to tune their behaviour to the needs of the client.
- Lameducking: servers must be allowed to gracefully shut-down by rejecting new requests while continuing to process in-flight ones.
- Flow-control: computing power and network capacity are often unbalanced between client & server. Flow control allows for better buffer management as well as providing protection from DOS by an overly active peer.
- Pluggable: A wire protocol is only part of a functioning API infrastructure. Large distributed systems need security, health-checking, load-balancing and failover, monitoring, tracing, logging, and so on. Implementations should provide extensions points to allow for plugging in these features and, where useful, default implementations.
- Extensions as APIs: extensions that require collaboration among services should favour using APIs rather than protocol extensions where possible. Extensions of this type could include health-checking, service introspection, load monitoring, and load-balancing assignment.
- Metadata exchange: common cross-cutting concerns like authentication or tracing rely on the exchange of data that is not part of the declared interface of a service. Deployments rely on their ability to evolve these features at a different rate to the individual APIs exposed by services.
- Standardised status codes: clients typically respond to errors returned by API calls in a limited number of ways. The status code namespace should be constrained to make these error handling decisions clearer. If richer domain-specific status is needed the metadata exchange mechanism can be used to provide that.

Interface definition
gRPC can use protocol buffers as both its Interface Definition Language (IDL) and as its underlying message interchange format. Protocol buffers is Google's mature open-source mechanism for serialising structured data.

Payload
By default gRPC uses protocol buffers (although it can be used with other data formats such as JSON).
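As a glimpse of what protocol buffers' compact binary encoding looks like, here is a hand-rolled sketch of its varint wire format in pure Python. This is not the real protobuf library, but the tag scheme and the 150 example follow the documented encoding:

```python
def encode_varint(value):
    # Protobuf varints: 7 bits per byte, least-significant group first,
    # high bit set on every byte except the last.
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_int_field(field_number, value):
    # Tag byte: (field_number << 3) | wire_type, where wire type 0 = varint.
    return encode_varint((field_number << 3) | 0) + encode_varint(value)

# Field 1 set to 150 encodes to the classic three bytes 08 96 01.
print(encode_int_field(1, 150).hex())  # 089601
```

Three bytes for a tagged integer is a large part of why gRPC messages are so much smaller than their JSON or XML equivalents.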

Transport protocol:
HTTP/2.

First release:
Google released gRPC as open source in 2015.



RSocket is a binary application-level (layer 7) protocol, originally developed by Netflix (and later by engineers from Facebook, Netifi and Pivotal amongst others), for use on byte stream transports such as TCP, WebSockets, and Aeron. The motivation behind its development was to replace HTTP, which was considered inefficient for many tasks such as microservices communication, with a protocol that has less overhead.

RSocket is a bi-directional, multiplexed, message-based, binary protocol based on reactive streams back pressure. It enables the following symmetric interaction models via async message passing over a single connection:

- request/response (stream of 1).
- request/stream (finite stream of many).
- fire-and-forget (no response).
- channel (bi-directional streams).
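The four interaction models above can be sketched with plain Python functions and generators. This is a toy, in-process sketch; a real RSocket implementation would frame these as asynchronous messages multiplexed over a single connection:

```python
# Toy sketches of RSocket's four interaction models (in-process, synchronous;
# real RSocket runs them as async messages over one multiplexed connection).

def request_response(request):
    return f"echo:{request}"          # stream of exactly 1

def request_stream(request, n):
    for i in range(n):                # finite stream of many
        yield f"{request}:{i}"

def fire_and_forget(request):
    pass                              # no response at all

def channel(requests):
    for request in requests:          # bi-directional stream: responses
        yield request.upper()         # flow back as requests arrive

print(request_response("ping"))         # echo:ping
print(list(request_stream("tick", 3)))  # ['tick:0', 'tick:1', 'tick:2']
print(list(channel(["a", "b"])))        # ['A', 'B']
```

The generator-based stream and channel shapes hint at why the protocol pairs naturally with Reactive Streams back pressure, as described below.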

It also supports session resumption, to allow resuming long-lived streams across different transport connections. This is particularly useful for mobile-to-server communication, where network connections drop, switch, and reconnect frequently.

Some of the key motivations behind RSocket include:

- support for interaction models beyond request/response such as streaming responses and push.
- application-level flow control semantics (async pull/push of bounded batch sizes) across network boundaries.
- binary, multiplexed use of a single connection.
- support resumption of long-lived subscriptions across transport connections.
- the need for an application protocol in order to use transport protocols such as WebSockets and Aeron.

The protocol is specifically designed to work well with Reactive-style applications, which are fundamentally non-blocking and often (but not always) paired with asynchronous behaviour. The use of Reactive back pressure, the idea that a publisher cannot send data to a subscriber until that subscriber has indicated that it is ready, is a key differentiator from "async".

Interface definition
Depends on the implementation. For example RSocket RPC uses Google's protocol buffer v3 as its interface definition language.

RSocket provides mechanisms for applications to distinguish payloads into two types: Data and Metadata. The distinction between the types in an application is left to the application.

The following are features of Data and Metadata:

- Metadata can be encoded differently than Data.
- Metadata can be "attached" (i.e. correlated) with the following entities:
  - Connection, via Metadata Push and Stream ID of 0.
  - Individual Request or Payload (upstream or downstream).

Transport protocol:
The RSocket protocol uses a lower level transport protocol to carry RSocket frames. A transport protocol MUST provide the following:

1. Unicast reliable delivery.
2. Connection-oriented transport with preservation of frame ordering. Frame A sent before Frame B MUST arrive in source order, i.e. if Frame A is sent by the same source as Frame B, then Frame A will always arrive before Frame B. No ordering guarantees are assumed across different sources.
3. FCS is assumed to be in use either at the transport protocol or at each MAC-layer hop, but no protection against malicious corruption is assumed.

RSocket as specified here has been designed for and tested with TCP, WebSockets, Aeron and HTTP/2 streams as transport protocols.

First release:
Although originally released by Netflix in October 2015, the protocol is currently a draft of the final specification. The current version of the protocol is 0.2 (Major Version: 0, Minor Version: 2), which is considered a 1.0 Release Candidate. Final testing is being done in the Java and C++ implementations, with the goal of releasing 1.0 in the near future.