{"API Proxy"}

Thinking About An API Proxy To Add Link Header To Each API Response

I was learning more about using the Link header for pagination yesterday, as part of my work on the Human Services Data Specification (HSDS), and this approach to putting hypermedia links in the header got me thinking about other possibilities. Part of the reason I was considering using the Link header for pagination on this particular project was that I was looking to alter the existing schema as little as possible -- I liked that I could augment the response with links, using the header.

Another side thought I had along the way was around the possibilities for using it to augment 3rd party APIs, and APIs from an external vantage point. It wouldn't be too hard to route API requests through a proxy, which could add a header with a personalized set of links tailored for each API request. If the request was looking up flights, the links could be to value-add services that might influence the decision, like links to hotels, events, and other activities. If you were looking up the definition of a word, the links could be to synonyms--endless possibilities.

You wouldn't have to just use it for pagination and other common link relations--you could make it more about discovery, search, or even serendipity injected from outside sources. Anyways, I think the fact that you can augment an existing response using a header opens up a lot of possibilities for adding hypermedia behaviors to existing APIs. It might also be an interesting way to introduce existing API owners to hypermedia concepts, by showing them the value that can be added when you provide valuable links.
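
To make the idea a little more concrete, here is a minimal sketch of what this kind of proxy could look like, using Flask and requests--both my choices for the example, not anything prescribed above. The upstream API and the value-add links are hypothetical, but the pattern is the same: pass the request through, then augment the response with a Link header on the way back.

```python
import requests
from flask import Flask, Response, request

app = Flask(__name__)

UPSTREAM = "https://api.example.com"  # hypothetical upstream API

# Hypothetical value-add links, keyed by the path being requested.
VALUE_ADD_LINKS = {
    "/flights": [
        '<https://api.example.com/hotels>; rel="related"; title="Hotels"',
        '<https://api.example.com/events>; rel="related"; title="Events"',
    ],
}

@app.route("/<path:path>", methods=["GET"])
def proxy(path):
    # Pass the request through to the upstream API untouched.
    upstream = requests.get(f"{UPSTREAM}/{path}", params=request.args)
    response = Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/json"),
    )
    # Augment the response with a Link header, leaving the body as-is.
    links = VALUE_ADD_LINKS.get(f"/{path}")
    if links:
        response.headers["Link"] = ", ".join(links)
    return response

if __name__ == "__main__":
    app.run(port=8080)
```

The client still gets the exact same response body it would have gotten from the API directly--the only difference is the extra Link header riding along with it.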

See The Full Blog Post


Scraping Static Docs Is Often Better Than Proxy For Generating Machine Readable API Definitions

I was looking to create an APIs.json plus OpenAPI Spec(s) for the WordPress.org API, and the Instructure Canvas Learning Management System (LMS) API. I am pulling together a toolkit to support a workshop at Davidson College in North Carolina this month, and I wanted a handful of APIs that would be relevant to students and faculty on campus.

In my experience, when it comes to documenting large APIs using OpenAPI Spec, you don't want to be hand rolling things, making auto generation essential. There are two options for accomplishing this: 1) I can use a proxy like Charles or Stoplight.io, or 2) I can write a script to scrape the publicly available HTML documentation for each API. While I do enjoy playing with mapping out APIs in Stoplight.io, allowing it to do the heavy lifting of crafting each API definition, sometimes there is more relevant metadata for the API available in the API documentation.

The OpenAPI Spec, plus APIs.json files for both the WordPress and Instructure Canvas APIs took me about an hour each to write the script and round off the OpenAPI Spec, making sure it was as complete as possible. Through scraping, I get descriptions for endpoints and parameters, and sometimes I also get other details including sample responses, enums, and response codes.
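
Here is a rough sketch of the kind of scraping script I'm describing, using requests and BeautifulSoup (my choices for the example). The HTML selectors are hypothetical--every documentation site needs its own--but the output is the same sort of bare-bones Swagger / OpenAPI definition, with paths, summaries, and parameters pulled straight from the docs.

```python
import json
import requests
from bs4 import BeautifulSoup

DOCS_URL = "https://developer.example.com/api/docs"  # hypothetical docs page

def scrape_docs(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    spec = {
        "swagger": "2.0",
        "info": {"title": "Scraped API", "version": "1.0"},
        "paths": {},
    }
    # Hypothetical markup: each endpoint documented in a div.endpoint block.
    for block in soup.select("div.endpoint"):
        method = block.select_one(".method").text.strip().lower()
        path = block.select_one(".path").text.strip()
        summary = block.select_one(".description").text.strip()
        parameters = [
            {
                "name": row.select_one(".name").text.strip(),
                "in": "query",
                "description": row.select_one(".desc").text.strip(),
                "type": "string",
            }
            for row in block.select("table.parameters tr.param")
        ]
        spec["paths"].setdefault(path, {})[method] = {
            "summary": summary,
            "parameters": parameters,
            "responses": {"200": {"description": "OK"}},
        }
    return spec

if __name__ == "__main__":
    print(json.dumps(scrape_docs(DOCS_URL), indent=2))
```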

One downside of obtaining an API definition by scraping is that I only get the surface area of an API, not the responses and underlying data model. Sometimes this is included in documentation, but I do not always harvest it--waiting until I can get an often more correct schema when I map things out using a proxy or via a HAR file. This is OK. I find the trade-off worth it. I'd rather have the more human-centered descriptions and names for each endpoint, than the response definitions--those will come with time, and more usage of the actual APIs.

In the end, it really depends on the size of an API, and the quality of the API documentation. If it is a big API, and the documentation is well crafted, it is preferable to scrape and auto generate the definition. Once I have this, I can load it into Postman or Stoplight.io, start making API calls, and use either Stoplight's proxy, or my own solution that uses Charles Proxy, to provide the remaining schema of the responses, as well as the resulting HTTP status code(s).

I think the human touch on all APIs.json, OpenAPI Spec, and API Blueprint files will prove to be essential in streamlining interactions at every stop along the API life cycle. If you can't easily understand what an API does, and what the moving parts are, the rest won't matter, so having simple, well written titles and descriptions for the APIs described in each machine readable definition is well worth any extra work. Even with auto generation via scraping, or Stoplight.io, I find I still have to give each API definition a little extra love to make sure it is as polished as possible.

I'm thinking I will start keeping a journal of the work that goes into crafting each API's definition(s). It might be something I can use down the road to further streamline the creation, and maintenance of my API definitions, and the API services I develop to support all of this.

Here is the APIs.json for the Wordpress.org API by the way:

Here is the APIs.json for the Instructure Canvas API as well:

You can see these, and some other API definitions for my workshop, over at the Github repo for the project. I created a new Liquid template that allows me to display APIs.json and OpenAPI Specs within the Jekyll site for this project--something that I will be using to better deliver API driven content, visualizations, and other resources that help us learn about, and put APIs to work.

See The Full Blog Post


Automated Mapping Of The API Universe With Charles Proxy, Dropbox, OpenAPI Spec, And Some Custom APIs

I have been working hard for about a year now trying to craft machine readable API definitions for the leading APIs out there. I've written before about my use of Charles Proxy to generate OpenAPI Spec files, something I've been evolving over the last couple of days, making it more automated, and hopefully making my mapping of the API universe much more efficient.

Hand crafting even the base API definition for any API is time consuming, something that swells quickly into hours when you consider the finish work that is required, so I was desperately looking at how I could automate this aspect of my operations more. I have two modes when looking at an API: review mode, where I'm documenting the API and its surrounding operations, and a second mode that is about actually using the API. While I will still be reviewing APIs, my goal is to immediately begin actually using an API, where I feel most of the value is, while also kicking off the documentation process in the same motion.

Logging All Of My Traffic Using Charles Proxy On My Machine
Using Charles Proxy, I route all of my network traffic on my MacBook Pro through a single proxy which I am in control of, allowing me to log every Internet location my computer visits throughout the day. It is something I cannot leave running 100% of the time, as it breaks certificates and sets off security warnings from a number of destinations, but it is something I can run about 75% of my world through--establishing a pretty interesting map of the resources I consume, and produce, each day.

Auto Saving Charles Proxy Session Files Every 30 Minutes
While running Charles Proxy, I have it set up to auto save a session XML file every 30 minutes, giving me bite size snapshots of transactions throughout my day. I turn Charles Proxy on or off, depending on what I am doing. I selected the session XML format because, after looking at each format, I felt it had the information I needed, while also being easily imported into my database back end.

Leverage Dropbox Sync And API To Process Session Files
The session XML files generated by Charles Proxy get saved into my local Dropbox folder on my MacBook Pro. Dropbox does the rest--it syncs all of my session XML files to the cloud, securely stored in a single application folder. This allows me to easily generate profiles of websites and APIs, something that passively occurs in the background while I work on specific research. The only time Dropbox will connect and sync my files is when I have Charles Proxy off, otherwise it can't establish a secure connection.

Custom API To Process Session Files Available In Dropbox
With my network traffic logged, and stored in the cloud using Dropbox, I can then access the session files via the Dropbox API. To handle this work, I set up an API that will check the specified Dropbox app folder, associated with its Dropbox API application access, and import any new files that it finds. Once a file has been processed, I delete it from Dropbox, dumping any personally identifiable information that may have been present--however, I am not doing banking, or other vital things, with Charles Proxy on.
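
A minimal sketch of this processing step, using the official Dropbox Python SDK (an assumption on my part--the actual service is a custom API), looks something like this: list the new session XML files in the app folder, pull each one down for import, and delete it once it has been processed.

```python
import dropbox

ACCESS_TOKEN = "YOUR_DROPBOX_APP_TOKEN"  # placeholder

def process_session_files(handle_session_xml):
    dbx = dropbox.Dropbox(ACCESS_TOKEN)
    # With an app folder scoped token, "" is the root of the app folder.
    for entry in dbx.files_list_folder("").entries:
        if not entry.name.endswith(".xml"):
            continue
        _, response = dbx.files_download(entry.path_lower)
        handle_session_xml(response.content)  # import the session into the database
        # Remove the file once processed, dropping any personal data with it.
        dbx.files_delete_v2(entry.path_lower)
```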

Custom API To Organize Transactions By Host & Media Type
I now have all my logged transactions stored in a database, and I can begin to organize them by host and media type--something I'm sure I will evolve with time. To facilitate this process I have created a custom API that allows me to see each unique domain or sub-domain that I visit during my logging with Charles Proxy. I am mostly interested in API traffic, so I'm looking for JSON, XML, and other API related media types. I do not process images, or many other common media types, but I do log traffic to HTML sites, routing it into a separate bucket which I describe below.
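
The bucketing logic itself is pretty simple. Here is a small sketch of what it looks like: group the logged transactions by host and media type, skipping anything on a blacklist of media types I don't care about. The transaction dictionaries are hypothetical, shaped by whatever the session import step stores.

```python
from collections import defaultdict

MEDIA_TYPE_BLACKLIST = {"image/png", "image/jpeg", "image/gif", "text/css"}

def bucket_transactions(transactions):
    """Group logged transactions by (host, media type), skipping blacklisted types."""
    buckets = defaultdict(list)
    for txn in transactions:
        media_type = txn.get("response_content_type", "").split(";")[0].strip()
        if media_type in MEDIA_TYPE_BLACKLIST:
            continue
        buckets[(txn["host"], media_type)].append(txn)
    return buckets
```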

Custom API To Generate OpenAPI Spec For Each Transaction
In addition to storing the primary details for each transaction I log, for each transaction with an application/json response, I auto-generate an OpenAPI Spec file, mapping out the surface area of the API endpoint. The goal is to provide a basic, machine readable definition of the transaction, so that I can group by host, and other primary details I'm tracking on. This is the portion of the process that generates the map I need for the API universe.
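
Here is a sketch of what turning a single logged transaction into a starter OpenAPI (Swagger 2.0) definition can look like--just the host, path, method, and query parameters. The transaction fields are assumptions, based on the kind of details Charles records for each request.

```python
from urllib.parse import urlsplit, parse_qsl

def transaction_to_swagger(txn):
    """Build a starter Swagger 2.0 definition from a single logged transaction."""
    parts = urlsplit(txn["url"])
    parameters = [
        {"name": name, "in": "query", "type": "string"}
        for name, _ in parse_qsl(parts.query)
    ]
    return {
        "swagger": "2.0",
        "info": {"title": parts.netloc, "version": "1.0"},
        "host": parts.netloc,
        "basePath": "/",
        "paths": {
            parts.path or "/": {
                txn["method"].lower(): {
                    "parameters": parameters,
                    "responses": {
                        str(txn.get("status", 200)): {"description": "Logged response"}
                    },
                }
            }
        },
    }
```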

Custom API To Generate JSON Schema For Each Transaction
In addition to generating an OpenAPI Spec for each transaction that I track on with an application/json response, I generate a JSON Schema for the JSON returned. This allows me to map out what data is being returned, without it containing any of the actual data itself. I will do the same for any request body as well, providing a JSON Schema definition for what data is being sent as well as received within any transaction that occurs during my Charles Proxy monitoring.
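
The schema generation is a naive recursive walk over the JSON. Here is a sketch of the basic approach--a real version needs to handle mixed-type arrays, formats, and required properties, but this shows how the structure is kept while the actual data is thrown away.

```python
def infer_schema(value):
    """Derive a rough JSON Schema fragment from a parsed JSON value."""
    if isinstance(value, dict):
        return {
            "type": "object",
            "properties": {k: infer_schema(v) for k, v in value.items()},
        }
    if isinstance(value, list):
        return {
            "type": "array",
            "items": infer_schema(value[0]) if value else {},
        }
    if isinstance(value, bool):  # bool must be checked before int in Python
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    if value is None:
        return {"type": "null"}
    return {"type": "string"}
```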

Automation Of The Process Using The EasyCRON Layer Of My Platform
I now have four separate APIs that help me automate the logging of my network traffic, the storing and processing of all transactions I record, and then automatically generating an OpenAPI Spec and JSON Schema for each API call. This provides me with a more efficient way to kick off the API documentation process, automatically generating machine readable API definitions and data schema from the exhaust of my daily work, which includes numerous API calls, for a wide variety of different reasons.

Helping Me Map Out The World Of Web APIs As The API Evangelist
The primary goal of this work is to help me map out the world of APIs, as part of my work as the API Evangelist. Using this process, all I have to do is turn on Charles Proxy, fire up my Postman, visit an API I want to map out, and start using the API. Usually within an hour, I will then have an OpenAPI Spec for each transaction, as well as one aggregated by host, along with a supporting JSON Schema for the underlying request or response data model--everything I need to map out more APIs, more efficiently scaling what I do.

Helping Me Understand The Media Types In Use Out There Today
One thing I noticed right away was the variety of media types I was coming across. At first I locked things down to application/json, but then I realized I wanted XML, and others. So I reversed my approach, let through all media types, and started building a blacklist of the ones I did not want to let through. Leaving this part of the process open, and requiring manual evaluation of media types, is really pushing forward my awareness of alternative media types, and is something that was an unexpected aspect of this work.

Helping Me Understand The Companies I Use Daily In My Business
It is really interesting to see the list of hosts that I have generated as part of this work. Some of these are companies behind applications that I depend on, like Tweetdeck, Github, and Dropbox, while others are companies I'm looking to learn more about as part of API Evangelist research and storytelling. I'm guessing this understanding of the companies that I'm using daily in my work will continue to evolve significantly as I continue looking at the world through this lens.

Helping Me Understand The Data I Exchange Daily With Companies
The host of each transaction gives me a look at the companies I transact with daily, but the JSON Schema derived from requests and responses that are JSON also gives me an interesting look at the information I'm exchanging in my daily operations, either directly with platforms I depend on, or casually with websites I visit and the web applications I'm testing out. I have a lot of work ahead of me to actually catalog, organize, and derive meaning from the schema I am generating, but at least I have them in buckets for further evaluation in the near future.

Routing Websites That I Visit Into A Separate Bucket For Tracking On
At first I was going to just ditch all GET requests that returned HTML, but instead I decided to log these transactions, keeping the host, path, and parameters in a separate media type bucket. While I won't be evaluating these domains like I do the APIs that return JSON, XML, etc., I will be keeping an eye on them. I'm feeding these URLs into my core monitoring system, and for some companies I will pull their blog RSS, Twitter handles, and Github accounts, in addition to looking for other artifacts like OpenAPI Specs, API Blueprints, Postman Collections, APIs.json, and other machine readable goodies.

Targeting Of Specific Web, Mobile, Device, And API Driven Platforms
Now that I have this new, more automated API mapping system set up, it will encourage me to target specific web, mobile, device, and API driven platforms. I will be routing my iPhone and iPad through the proxy, allowing me to map out mobile applications. If I can just get to work using an API in my Postman client, or use the website or mobile app, and auto-generate a map of the APIs in use as OpenAPI Specs, and the data models as JSON Schema, you are going to find me mapping a number of new platform targets in 2016.

Ok, So What Now? What Am I Going To Do With This Mapping Info Next?
You know, I'm not sure what is next. I learned a lot from this 25 hour sprint to better automate this process. I think I will just sit back and let it run for a week or two, and do what I regularly do: visit the websites and developer areas of platforms that I'm keeping an eye on. I will keep using APIs to run my own operations, as well as play with as many APIs as I can possibly fit into my days. Periodically I will check in to see how my new API mapping system is working, and see if I can't answer some pressing questions I have:

  • How much do I create vs. consume? i.e. POST, PUT & PATCH over GET?
  • How often do I use my own resources vs the API resources of others?
  • Do I have any plan B or C for all resources I am using?
  • Do I agree with the terms of service for these platforms I am using?
  • Do I pay any of the services that are a regular part of my daily operations?
  • Am I in control of my account and data for these platforms & companies?

For the moment, I am just looking to establish a map of the digital surface area I touch each day, and further scale my ability to map out unknown areas of the API wilderness. I am curious to see how many OpenAPI Specs and JSON Schemas I can generate in a week or month now. I have no idea how I'm going to store or organize all of these maps of the API sector, but it is something I'm sure I can find a solution for using my APIs.json format.

This is the type of work I really enjoy. It involves scaling what I do, better understanding what already exists out there, fueling my storytelling, and pushing me to code and craft custom APIs, while also employing other open tooling, formats, and services along the way--this is API Evangelist.

See The Full Blog Post


Parsing Charles Proxy Exports To Generate Swagger Definitions, While Also Linking Them To Each Path

Making sure the Swagger files I craft possess a complete definition for their underlying data models, one that is linked to each API path and parameter where it is put to use, is important to me, but damn it is a lot of work. As I mentioned in my last piece, I'm looking at the Twitter Swagger file, and my head starts spinning thinking about how much work it will be to hand-define all of the data models that are used across the almost 100 Twitter endpoints.

I quickly got to work finding a better solution--I landed on Charles Proxy. I had downloaded and installed Charles Proxy to better understand how we could map out the dark layer of the API universe that the popular mobile applications we use depend on. When running, Charles proxies all the requests and responses my desktop apps and browsers make on my local MacBook. I can also route my iPhone and iPad through the proxy when I want to record my mobile app usage as well. This is perfect for helping me map out the public APIs in my API Stack work!

When Charles Proxy is running, it saves an XML summary export to my local Dropbox folder, which then gets synced to the cloud via the Dropbox API. I am now working on a script that will keep an eye on my Dropbox folder, and process any new Charles export files it finds. As I process each file, I'm cherry picking from the domains of specific companies that I'm tracking on, pulling out the request and response information I need to craft a Swagger definition.

To generate the traffic I need, I just load up any API I'm looking to profile in Postman, and start working my way through the list of endpoints until I've covered the entire surface area of the API. I find it is easy to generate a beginning Swagger definition, which includes the host, base URL, endpoints, and parameters, then load it into Postman, and let Charles Proxy complete the rest of the Swagger definitions collection, linking each one to any path or parameter it is referenced by. I will be running checks on request details to make sure I haven't forgotten about any endpoints or parameters, but my goal is primarily around polishing the definitions collection, with linkage back to each endpoint.
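
The linking step is mostly bookkeeping. Here is a sketch of what it looks like: take a schema captured via the proxy, add it to the Swagger file's definitions collection, and reference it from the path, method, and status code it belongs to. The function and field names here are my own, not from any existing tool.

```python
def link_definition(swagger, path, method, status, name, schema):
    """Add a captured schema to the definitions collection and link it to its operation."""
    swagger.setdefault("definitions", {})[name] = schema
    operation = swagger["paths"][path][method.lower()]
    operation.setdefault("responses", {})[str(status)] = {
        "description": "Captured via Charles Proxy",
        "schema": {"$ref": f"#/definitions/{name}"},
    }
    return swagger
```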

I will not rely on these Swagger definitions generated from the Charles Proxy as-is. I will be queuing them up in Github repo(s), and syncing them with the existing, often hand-crafted Swagger definitions I'm already evolving. Hopefully this process will help me automate the profiling of popular public APIs, and enable me to crank through more APIs this summer, as part of my API Stack research.

All of this is working out well. My need to automate the defining of underlying data models reminded me of the dark API work I was already doing with Charles Proxy--something I will spend more time on this summer. I am looking to generate a Swagger definition for each of the desktop apps I use on my MacBook, and the mobile apps I use on my iDevices--stay tuned!

See The Full Blog Post


Proxy The Public API You Are Using With APITools And Send Me The Swagger It Generates, Please...

APITools is a simple, open source API middleware that allows you to “track, transform and analyze the traffic between your app and the APIs”. With just a few clicks you can proxy any API you use, and when you make calls through the proxy, you get a bunch of valuable information in return.

One thing APITools does that is extremely valuable to me is generate Swagger definitions, mapping out the surface area of an API with each call I make. These API definitions have a wide variety of uses for me, ranging from better understanding the API designs of popular services, to providing API search services through open API search engines like APIs.io.

If you are regularly developing against a public API, can you take a moment to swap the baseURL with one created in APITools, make all of your calls to the API, then send me a copy of the Swagger definition? I would sure appreciate the help in creating Swagger definitions for all of the popular APIs available today. Don't worry, all your work is openly available on both APIs.io and API Stack for re-use and forking by anyone.

See The Full Blog Post


The Quickest Way To Proxy, Secure, Rate Limit, and Monitor My APIs

As I am designing my APIs, one of the first things I decide is whether or not I will be making each one public. If it's a simple enough resource, and doesn't put too much load on my servers, I will usually make it publicly available. However, if an API has write capabilities, could potentially put a heavy load on my servers, or just possesses some resource that I want to keep private, I will secure the API.

I use 3Scale for my API management infrastructure--I have since 2011, long before I ever started working with them on projects, and organizing @APIStrat. When it comes time to secure any of my APIs, I have a default snippet of code that I wrap around each API, validating application keys and recording activity--which 3Scale calls the plugin integration approach.

This time around, I logged into my 3Scale admin area, went to my API integration area, and saw the setup for the 3Scale Cloud API proxy that they are calling APICast. I couldn't help but notice the simple setup of the proxy--I give it a private base URL for my API, it gives me a public base URL back, and then I can configure the proxy rules, setting the rate limits for each of my API resources.

 

That is it. I can set up my APIs in a sandbox environment, then take them live when I am ready. It is the quickest way to secure my APIs that I've seen, allowing me to instantly lock down my APIs, require anyone who uses them to register for a key, and then track how they are being put to use--no server configuration or setup needed.
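
For anyone wondering what consuming an API behind the proxy looks like, here is a quick sketch using the requests library. The public base URL, the endpoint, and the user_key credential pattern are all assumptions for the example--check your own proxy settings for the exact parameter names 3Scale expects.

```python
import requests

PUBLIC_BASE_URL = "https://api-xxxx.example.com"  # hypothetical public proxy URL
USER_KEY = "APPLICATION_KEY_FROM_SIGNUP"          # placeholder credential

# Call a hypothetical endpoint behind the proxy, passing the key as a query parameter.
response = requests.get(
    f"{PUBLIC_BASE_URL}/resource",
    params={"user_key": USER_KEY},
)
print(response.status_code)
```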

This easy setup, bundled with the fact that you can set up 3Scale for free, and get up to 50K API calls a day, makes it the perfect environment for figuring out your API surface area. Then, when ready, you can pay for heavier volume, and take advantage of the other advanced features available via 3Scale. I'm still using the plugin approach for 90% of my endpoints, but I will be using APICast to quickly stand up, secure, and monitor some of my APIs. I will publish a how-to after I finish setting this one up.

Disclosure: 3Scale is an API Evangelist partner.

See The Full Blog Post


Building Blocks Of API Deployment

As I continue my research into the world of API deployment, I'm trying to distill the services and tooling I come across down into what I consider to be a common set of building blocks. My goal with identifying API deployment building blocks is to provide a simple list of the moving parts that enable API providers to successfully deploy their services.

Some of these building blocks overlap with other core areas of my research, like design and management, but I hope this list captures the basic building blocks of what anyone needs to know to be able to follow the world of API deployment. While this post is meant for a wider audience, beyond just developers, I think it provides a good reminder for developers as well, and can help things come into focus. (I know it does for me!)

Also, there is some overlap between some of these building blocks, like API Gateway and API Proxy, both doing very similar things, but labeled differently. Identifying building blocks, for me, can be very difficult, and I'm constantly shifting definitions around until I find a comfortable fit--so some of these will evolve, especially with the speed at which things are moving in 2014.

CSV to API - Text files that contain comma separated values, or CSVs, are one of the quickest ways to convert existing data to an API. Each row of a CSV can be imported and converted to a record in a database, and used to easily generate a RESTful interface that represents the data stored in the CSV. CSV to API can be very messy depending on the quality of the data in the CSV, but can be a quick way to breathe new life into old catalogs of data lying around on servers or even desktops. The easiest way to deal with CSV is to import it directly into a database, then generate the API from the database, but the process can also be done at the time of API creation (see the sketch after this list).
Database to API - Database to API is definitely the quickest way to generate an API. If you have valuable data, generally in 2013, it will reside in a Microsoft SQL Server, MySQL, PostgreSQL or other common database platform. Connecting to a database and generating a CRUD, or create, read, update and delete API on existing data makes sense for a lot of reasons. This is the quickest way to open up product catalogs, public directories, blogs, calendars or any other commonly stored data. APIs are rapidly replacing database connections; when bundled with common API management techniques, APIs can allow for much more versatile and secure access that can be made public and shared outside the firewall.
Framework - There is no reason to hand-craft an API from scratch these days. There are numerous frameworks out there that are designed for rapidly deploying web APIs. Deploying APIs using a framework is only an option when you have the necessary technical and developer talent to be able to understand the setup of the environment and follow the design patterns of each framework. When it comes to planning the deployment of an API using a framework, it is best to select one of the common frameworks written in the preferred language of the available developer and IT resources. Frameworks can be used to deploy data APIs from CSVs and databases, content from documents, or custom code resources that allow access to more complex objects.
API Gateway - API gateways are enterprise quality solutions that are designed to expose API resources. Gateways are meant to provide a complete solution for exposing internal systems and connecting with external platforms. API gateways are often used to proxy and mediate existing API deployments, but may also provide solutions for connecting to other internal systems like databases, FTP, messaging and other common resources. Many public APIs are exposed using frameworks, while most enterprise APIs are deployed via API gateways--supporting much larger deployments.
API Proxy - API proxies are commonplace for taking an existing API interface and running it through an intermediary, which allows for translations, transformations and other added services on top of the API. An API proxy does not deploy an API, but can take existing resources like SOAP and XML-RPC and transform them into more common RESTful APIs with JSON formats. Proxies provide other functions such as service composition, rate limiting, filtering and securing of API endpoints. API gateways are the preferred approach for the enterprise, and the companies that provide these services support larger API deployments.
API Connector - In contrast to an API proxy, there are API solutions that are proxyless, just allowing an API to connect or plug in to the advanced API resources. While proxies work in many situations, allowing APIs to be mediated and transformed into required interfaces, API connectors may be preferred in situations where data should not be routed through proxy machines. API connector solutions only connect to existing API implementations, and are easily integrated with existing API frameworks as well as web servers like Nginx.
Hosting - Hosting is all about where you are going to park your API. Usual deployments are on-premise within your company or data center, in a public cloud like Amazon Web Services, or a hybrid of the two. Most of the existing service providers in the space support all types of hosting, but some companies, who have the required technical talent, host their own API platforms. With HTTP being the transport that modern web APIs put to use, sharing the same infrastructure as web sites, hosting APIs does not take any additional skills or resources if you already have a web site or application hosting environment.
API Versioning - There are many different approaches to managing different versions of web APIs. When embarking on API deployment you will have to make a decision about how each endpoint will be versioned and maintained. Each API service provider offers versioning solutions, but generally it is handled within the API URI or passed as an HTTP header. Versioning is an inevitable part of the API life-cycle, and it is better to integrate it by design as opposed to waiting until you are forced to evolve your API interface.
Documentation - API documentation is an essential building block for all API endpoints. Quality, up to date documentation is essential for on-boarding developers and ensuring they successfully integrate with an API. Documentation needs to be derived from quality API designs, kept up to date, and made accessible to developers via a portal. There are several tools available for automatically generating documentation, and even what is called interactive documentation, which allows developers to make live calls against an API while exploring the documentation. API documentation is part of every API deployment.
Code Samples - Second to documentation, code samples in a variety of programming languages are essential to a successful API integration. With quality API design, generating samples that can be used across multiple API resources is possible. Many of the emerging API service providers, and the same tools that generate API documentation from JSON definitions, can also auto generate code samples that can be used by developers. Generation of code samples in a variety of programming languages is a requirement during API deployment.
Scraping - Harvesting or scraping of data from an existing website, content or data source. While we all would like content and data sources to be machine readable, sometimes you just have to get your hands dirty and scrape it. I don't support scraping of content in all scenarios and business sectors, but in the right situations scraping can provide a perfectly acceptable content or data source for deploying an API.
Container - The new virtualization movement, led by Docker, and supported by Amazon, Google, Red Hat, Microsoft, and many more, is providing new ways to package up APIs, and deploy them as small, modular, virtualized containers.
Github - Github provides a simple, but powerful way to support API deployment, allowing for publishing of a developer portal, documentation, code libraries, TOS, and all the supporting API business building blocks that are necessary for an API effort. At a minimum, Github should be used to manage public code libraries, and engage with API consumers using Github's social features.
Terms of Use / Service - Terms of Use provide a legal framework for developers to operate within. They set the stage for the business development relationships that will occur within an API ecosystem. TOS should protect the API owner's company, assets and brand, but should also provide assurances for developers who are building businesses on top of an API. Make sure an API's TOS passes inspection with the lawyers, but also strikes a healthy balance within the ecosystem and fosters innovation.
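
To ground the CSV to API building block from the list above, here is a bare-bones sketch using Flask and the csv module. The file name and fields are hypothetical, and a production version would import into a database and add paging, but it shows how little code sits between a CSV file and a working API.

```python
import csv
from flask import Flask, jsonify

app = Flask(__name__)

CSV_FILE = "catalog.csv"  # hypothetical data file

def load_records():
    # Each row of the CSV becomes a dictionary keyed by the header row.
    with open(CSV_FILE, newline="") as handle:
        return list(csv.DictReader(handle))

@app.route("/records", methods=["GET"])
def list_records():
    return jsonify(load_records())

@app.route("/records/<int:index>", methods=["GET"])
def get_record(index):
    records = load_records()
    if index < 0 or index >= len(records):
        return jsonify({"error": "Not found"}), 404
    return jsonify(records[index])

if __name__ == "__main__":
    app.run(port=8080)
```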

If there are any features, services or tools you depend on when deploying your APIs, please let me know at @kinlane. I'm not trying to create an exhaustive list, I just want to get an idea of what is available across the providers, and where the gaps potentially are.

I feel like I'm finally getting a handle on the building blocks for API design, deployment, and management, and understanding the overlap in the different areas. I will revisit my design and management building blocks, and evolve my ideas of what my perfect API editor would look like, and how this fits in with API management infrastructure from 3Scale, and even API integration.

Disclosure: 3Scale is an API Evangelist partner.

See The Full Blog Post


What Will It Take To Sell My API As A Wholesale Resource

I'm continuing my exploration of the possibilities of offering up a wholesale version of an API resource. While wholesale is not an option for all types of APIs, there is a subset of APIs that are more utility in nature and would lend themselves nicely to being sold wholesale to other API providers.

I want to better understand the nuts and bolts of what it will take to offer up APIs in this way, and for this exercise I'm going to explore providing my recent screenshot API as a wholesale API that other API providers could resell alongside their own resources. An API provider could have their own news, content or other resources, and decide it would be more cost effective to resell my screen capture API, rather than design, deploy and provide their own.

I have designed, developed and deployed my screenshot API--now what do I need to make it available wholesale?

  1. Definition - I'll need to have some sort of API definition in API Blueprint, Swagger or RAML to be able to communicate my interface and underlying data model to other providers in a machine readable way that lets them interface with it, as well as potentially develop other tooling around their resale of my resource.
  2. Proxy - I don't think this one is a requirement, as some providers would prefer to develop their own proxy layer, but providing a proxy harness that other API providers could use to deploy my API as a resource within their domain would be nice to have (see the sketch after this list). Providing it in a variety of languages including PHP, Python, Ruby, C#, Java and Node.js would be sensible.
  3. Management APIs - To support wholesale interactions I would need a set of my own APIs that providers can use to accomplish common API management features like usage volumes, rate limits, and user management if applicable. These services would have to be available as APIs so that providers could seamlessly integrate into their own API management platform.
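
Here is a very rough sketch of the proxy harness idea from item 2, written in Python rather than the PHP, Ruby, or Node.js versions I mention (purely for illustration). The reseller would deploy something like this within their own domain, forwarding requests to the wholesale screenshot API using their wholesale key--the credential name and URLs are hypothetical.

```python
import requests
from flask import Flask, Response, request

app = Flask(__name__)

WHOLESALE_BASE_URL = "https://wholesale.example.com/screenshot"  # hypothetical
WHOLESALE_KEY = "RESELLER_WHOLESALE_KEY"                         # placeholder

@app.route("/screenshot", methods=["GET"])
def screenshot():
    # Forward the consumer's request, adding the reseller's wholesale credential.
    params = dict(request.args)
    params["wholesale_key"] = WHOLESALE_KEY  # hypothetical credential name
    upstream = requests.get(WHOLESALE_BASE_URL, params=params)
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "image/png"),
    )

if __name__ == "__main__":
    app.run(port=8080)
```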

Those are just a few of the elements I think I would need to serve up my API in a wholesale way. I might think of more needs as I evolve my thoughts on this, and potentially develop a working prototype around my screenshot API.

Using these tools, an API provider could come and sign up for wholesale access, deploy a proxy within their domain, and use the API definition to deploy interactive documentation that was seamless with their own documentation. Next, I see two distinct scenarios for user management around wholesale APIs. You don't want users having to sign up for two separate keys, or even know about the wholesale provider in any way. This is where the management APIs would come in; depending on my business goals surrounding my wholesale resources, I would employ one of two user scenarios:

  1. User Profile Required - If I wanted to require my API resellers to pass along their user profiles along with API usage, I could provide some sort of key translation as part of the management APIs and / or as part of the proxy operations. When a new user first uses the API resources, my reseller would have to generate a profile for them in my wholesale system, generating a unique key, and either my system or my reseller's would translate keys upon each request (a small sketch of this translation follows the list). This way I could understand who is using my API resources, and enjoy deeper demographics around API sales.
  2. User Profile Not Required - Maybe, as a wholesale provider, I don't care about understanding who uses my API at this granular level--I just want to sell, sell, sell. In this case I could provide a much more simplified process that would just require resellers to sign up for a single API key, with all API requests tracked under this single provider key. Resellers would manage their own user keys, and just hardcode all requests to my API with their wholesale key via their proxy.
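
A small sketch of the key translation described in scenario 1 might look like this--the reseller's end-user key gets swapped for a wholesale profile key before the request heads upstream. The storage and naming here are placeholders for whatever the management APIs would actually provide.

```python
KEY_TRANSLATION = {}  # reseller end-user key -> wholesale profile key

def register_profile(reseller_user_key, wholesale_profile_key):
    """Called by the reseller (via a management API) when a new user first shows up."""
    KEY_TRANSLATION[reseller_user_key] = wholesale_profile_key

def translate_key(reseller_user_key):
    """Swap the reseller's end-user key for the wholesale profile key before forwarding."""
    wholesale_key = KEY_TRANSLATION.get(reseller_user_key)
    if wholesale_key is None:
        raise KeyError("No wholesale profile registered for this user")
    return wholesale_key
```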

I could understand both of these implementations. Some wholesale providers are going to be obsessed with understanding who is using their API, and require their resellers to be transparent and share their API developer profile data. While I personally think this is overkill, and it would be much simpler to just use a single wholesale key for each reseller, I will assume that most wholesale API providers will go this route.

The next step for this concept is to actually make it work for real. I have the screenshot API, as well as some other similar, utility style APIs, that I will use as my test cases. I use 3Scale infrastructure for my API management, so I have APIs for almost all aspects of my API management. I just need to proxy them on my end, and potentially offer them up to my prospective API resellers, giving them access to usage, rate limits and user management.

Right now this is just an idea, an academic exercise, but I see no reason it can't be a reality, and just like other goods and services in the real world, companies could sell wholesale versions of their API resources, further fueling the growth of the API economy. After making this scenario a little more real, I want to think through what this would look like in an on-premise scenario--no proxy involved.

See The Full Blog Post


My Local Storage Node Uses a Disposable Proxy to Connect To The Cloud

We have evolved and matured beyond our early days of cloud computing. We ended up seeing the cloud as a necessary evil--it was no longer the early love affair with storing our lives online.

Today I have my own local storage nodes, where I store my photos, music, video and even my DNA sequencing. My nodes are stored on my body, in my home and secret places that only I know.

I can access the information on these nodes manually through physical connections or via Telehash communication chains that I've established, allowing only my nodes to talk to each other and connect to networks I've deemed acceptable.

When I connect one of my nodes to the cloud (which I do from time to time), I use a disposable proxy that allows me to connect and transfer data, then dispose of the device, and the address of the device that I used. My goal is to only enter the World Wide Web via doorways that go away as soon as I'm done with them, with no trace.

This approach gives me the amount of control over my data that I desire, while also being able to transport information across mesh networks and even the open Internet. Disposable proxies represent the future of the Internet and how we modulate and bridge our personal and online self, in a way that gives us the highest level of control.

See The Full Blog Post


Deploy and Manage API on Amazon Web Services (AWS)

For the longest time I would get asked, "Which API service provider should I use to deploy my APIs?". This was a tough question, because historically the API management providers don't help you deploy your APIs, they only help you manage them.

Deploying your APIs was up to you. Generally you already had some sort of internal system that you would use to generate RESTful interfaces or you'd go find your own open source API framework and deploy. Then you'd proxy or connect your API to one of the API service providers.

These lines are now blurred by providers like Intel with their enterprise API gateway, and through API deployment resources from 3Scale. 3Scale is investing in open source server technology for NGINX, and blueprints for API deployment using Amazon Web Services.

3Scale recently published a quickstart tutorial on how to deploy an API on Amazon EC2 at Amazon Web Services (AWS), and manage it using 3Scale API management. My favorite part is that everything in this tutorial is completely FREE--a critical element to experimenting with APIs.

The 3Scale, AWS walk-though provides details for:

  • Creating and configuring EC2 Instance
  • Preparing Instance for Deployment
  • Deploying a demo API solution
  • Enabling API Management with 3Scale
  • Implementing an Nginx Proxy for Access Control

There are several important things going on here, beyond being able to do this for free with entry level AWS and 3Scale accounts. But I can't emphasize enough the value of this being free, allowing you to explore, experiment and iterate with your API--without spending a fortune! This is critical not just to your API initiative, but contributes to a healthier API space in general.

After being free, you are using proven open source technology like Ubuntu for the server OS and NGINX for your web server. 3Scale has invested in tools for NGINX, rather than building their own proprietary solutions, because NGINX has been proven to deliver at scale and has a large community to support it.

Third, with this solution, you retain control over your infrastructure. You are deploying on the proven Amazon cloud (which I hope you are already using in other areas), and you are connecting to free API management services with the opportunity to buy premium services. You are not proxying all your data and valuable API resources through a 3rd party proxy. You are connecting to the API management services you need like rate limits, access controls and analytics without giving up control over your data and resources.

I'm going to do the 3Scale API deployment tutorial myself, so that I know the process inside and out, have my own AMIs ready to deploy on AWS, and am able to walk others through it, when possible.

See The Full Blog Post


The API Evangelist Toolbox

I've spent a lot of time lately looking for new tools that will help you plan, develop, deploy and manage APIs. My goal is to keep refining the API Evangelist Tools section to provide a complete API tool directory you can filter by language or other tags.

I've added a number of open source tools to my database lately. But I know there are many more out there. So I put out on the Twitterz that I was looking for anything that was missing. Here is what I got:

Resulting in the following tools being suggested:

Carte - Carte is a simple Jekyll based documentation website for APIs. It is designed as a boilerplate to build your own documentation and is heavily inspired from Swagger and I/O docs. Fork it, add specifications for your APIs calls and customize the theme. Go ahead, see if we care.
Charles Proxy - Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).
Fiddler - Fiddler is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
foauth.org: OAuth for one - OAuth is a great idea for interaction between big sites with lots of users. But, as one of those users, it’s a pretty terrible way to get at your own data. That’s where foauth.org comes in, giving you access to these services in three easy steps.
Hurl - Hurl makes HTTP requests. Enter a URL, set some headers, view the response, then share it with others. Perfect for demoing and debugging APIs.
httpbin: HTTP Request & Response Service - Testing an HTTP Library can become difficult sometimes. PostBin.org is fantastic for testing POST requests, but not much else. This exists to cover all kinds of HTTP scenarios. Additional endpoints are being considered. All endpoint responses are JSON-encoded.
InspectB.in - InspectBin is based on the idea of RequestBin (requestb.in), set your http client or webhook to point to your InspectBin url. We will collect http requests and show it in a nice and friendly way, live!
I/O Docs - I/O Docs is a live interactive documentation system for RESTful web APIs. By defining APIs at the resource, method and parameter levels in a JSON schema, I/O Docs will generate a JavaScript client interface. API calls can be executed from this interface, which are then proxied through the I/O Docs server with payload data cleanly formatted (pretty-printed if JSON or XML).
localtunnel - The easiest way to share localhost web servers to the rest of the world.
Postman - REST Client - Postman helps you be more efficient while working with APIs. Postman is a scratch-your-own-itch project. The need for it arose while one of the developers was creating an API for his project. After looking around for a number of tools, nothing felt just right. The primary features added initially were a history of sent requests and collections. A number of other features have been added since then.
RequestBin - RequestBin lets you create a URL that will collect requests made to it, then let you inspect them in a human-friendly way. Use RequestBin to see what your HTTP client is sending or to look at webhook requests.
Runscope - OAuth2 Token Generator - Tools for developers consuming APIs in their mobile and web apps.

All tools have been added to the API Evangelist toolbox. As I continue to work with and define them, I will add more metadata that will help you find the tool you're looking for.

Thanks John Sheehan (@johnsheehan),  Phil Leggetter (@leggetter) and Darrel Miller (@darrel_miller). 

See The Full Blog Post


Updated Google Cloud Print Website

Google updated the Google Cloud Print website today with all new documentation, code samples and other goodies to help you get up and running using Google Cloud Print.

The restructured site pulls together months of learning from behind the scenes by Google, partners and developers on the GCP platform.

The new site starts with an introduction to Google Cloud Print and walks you through each of the components of the Google Cloud Print architecture:
  • Applications - Any type of application that enable users to print via Google Cloud Print such as web apps, desktop or mobile applications.
  • Google Cloud Print Services - Google's API allowing registering of printers, sharing of printers and sending of print jobs to these printers via applications.
  • User Interface - A set of common web interfaces developed by Google that allow users to manage their Google Cloud Print services.
  • Printers - Currently defined by cloud ready and non-cloud printing devices.
    • Cloud Ready Printers - A new generation of printers with native support for connecting to cloud print services.
    • Non-Cloud Printers - All other legacy printers that still connect to devices via PCs and network connections.
  • Google Chrome OS Printing - Google's new web operating system where cloud printing is the default print interface, and there is no option for local printing.

The new Google Cloud Print site provides two main areas for integration with the Google Cloud Print Services:
  • Submitting Print Jobs
    • GCP Web Elements - A JavaScript widget enabling simple printing of a PDF file using Google Cloud Print
    • Android Integration - Tools for submitting print jobs via Android mobile and tablet platforms
    • Services Interface - API documentation for the /submit, /jobs, /deletejob, /printer, /search interfaces, allowing seamless print integration
  • Receiving Print Jobs
    • Developer Guide - A guide that covers registering printers, handling printers and print jobs on the Google Cloud Print platform.
    • XMPP Handshake Flow - Details on the XMPP print job notification system used for notifying "printers" of new print jobs in real time.
    • Services Interface - API documentation for the /control, /delete, /fetch, /list, /register, and /update interfaces, allowing seamless print integration
The new Google Cloud Print update provides much more complete documentation on submitting print jobs and the XMPP integration for print job notification.
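
To give a feel for the services interface, here is a hedged sketch of submitting a print job through the /submit interface listed above, using Python and the requests library. The parameter names follow the Google Cloud Print documentation as I understand it--verify them against the services interface docs--and the auth token and printer id are placeholders.

```python
import requests

AUTH_TOKEN = "OAUTH_OR_CLIENTLOGIN_TOKEN"  # placeholder
PRINTER_ID = "YOUR_CLOUD_PRINTER_ID"       # placeholder

response = requests.post(
    "https://www.google.com/cloudprint/submit",
    headers={"Authorization": f"GoogleLogin auth={AUTH_TOKEN}"},
    data={
        "printerid": PRINTER_ID,
        "title": "Test print job",
        # Submitting a URL to print; the accepted content type values may differ,
        # so check the /submit documentation.
        "contentType": "url",
        "content": "https://example.com/document.pdf",
    },
)
print(response.json())
```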

They also provide Python code samples for integrating with GCP. You can also use the Mimeo PHP Google Cloud Print samples that I wrote earlier this year, and I will be working on a set of C# samples as well, publishing them when ready.

See The Full Blog Post


API Proxies, Connectors, and Add-Ons

I'm working through some 100K foot views of the API service provider arena, and trying to evolve my perspectives on What's Next for APIs.

This is all a work in progress, so I'm publishing here on kinlane.com instead of on apievangelist.com.

I wrote the other day about the Battle for Your API Proxy between the API service providers.

This included a tier of "proxy" API service providers that run all your API calls through a proxy before hitting your API.

Next there is the group of "connector" API service providers that provide you with a connector to put in your API, providing the same services that a proxy would.

Based upon what I'm seeing with Mashape and other indicators, I tried to show the playing field in a slightly different, and evolved way.

My vision of the future of APIs involves several key areas of evolution. These are based upon movements I'm already seeing, and where I'd like to see things go.

In this vision, API service providers don't just provide proxy, connector, management, developer area tools, and API marketplaces. They also provide actual API frameworks, like Mashape provides, as well as industry wide developer opportunities.

Developers can build code against a single API or multiple APIs, and they can build tools for API owners to deploy in their own management and developer areas, as well as within the "proxy" or "connector" layer too. Service providers will give developers distribution opportunities to other marketplaces and API owners.

API Owners will not be locked into a single API service provider for their API, Management, Proxy, Connector, Developer or Marketplace needs. They will have a buffet of add-ons they can choose from to enhance every aspect of their API ecosystem.

One key difference is that API owners can choose to proxy or connect their API, or both if necessary for different services.

This model provides add-ons at every layer of the API ecosystem. If a developer builds a set of tools for video streaming, it can be deployed at the proxy / connector, management, API, and developer area layers. Billing for a video API might look radically different than billing for a print API.

This will provide the type of innovation that is needed at this stage of the game: a nice selection of tools for API owners to choose from, with service providers and developers making money.


See The Full Blog Post


Google Releases Two New APIs for Chrome Extensions

In the latest Chrome Beta release, Google made available two new experimental extension APIs: the Web Navigation and Proxy Extension APIs.
  • Web Navigation Extension API - Allows extension developers to observe browser navigation events. The API therefore allows an extension to keep track of exactly what page the tab is showing, and how the user got there.
  • Proxy Extension API - Allows users to configure Chrome's proxy settings via extensions. Proxies can be configured for the entire browser or independently for regular and incognito windows.
You can test drive these new APIs by enabling Experimental Extension APIs.

Until the APIs are stable, they require explicit permission from users.

See The Full Blog Post


Submit Google Cloud Print Job Example in C#

I have been working on my Google Cloud Print Proxy software in PHP to make it available to a wider audience.

When I get more time I intend to work on Ruby and Python versions.

Much of the work being done with the Google Cloud Print Services Interface is being done by major printer manufacturers in private.

I see some of this through the conversation on the developers wiki, but they tend not to share very much.

One developer shared his new C# example for submitting a Google Cloud Print job last night. I wanted to share it with everyone.

See The Full Blog Post


Secure Printing with Google Cloud Print

I've been doing a lot of work with the Google Cloud Print API lately. I've built a prototype Google Cloud Print Proxy, and I am trying to push the boundaries of what can be done with Google Cloud Print.

I'm engineering cloud print solutions for commercial printing, and along the way I'm also doing a lot of research around device printing.

When it comes to device printing, I keep seeing a concept called follow me printing, or pull printing, where you can queue up a print job from your computer, and then go to the printer and print from the device.

Think of this in terms of medical records or other secure documents. You don't want to print down the hall, then have it take 5 minutes to walk there and get it.

With mobile phones and Google Cloud Print, you can have your document in Google Docs, walk to your target printer, pull out your smart phone, and print your document.

You no longer need the device to have a pull printing interface. Google Cloud Print and your mobile phone handle securely printing your documents when you're ready.

See The Full Blog Post


Managing Google Cloud Printers

I have been getting a lot of questions from users regarding how to manage their Google Cloud Printers.

To help guide users stumbling across my site, there are two types of Google Cloud Print Users:
  • End Users - You are installing Google Cloud Print on your Windows machine to print to it from your mobile phone.
  • Developers - Developers building Google Cloud Print Proxies and Cloud Print Aware Printers.
I'm in the developer camp. My register cloud printer or my delete cloud printer blog posts won't help end users much--they are for developers.

If you are looking to manage your cloud printers and print jobs, go to the Google Cloud Print Interface. It lets you:
  • See your Google Cloud Printers
  • Delete your Google Cloud Printers
  • See shared Google Cloud Printers
The Google Cloud Print Interface also lets you manage print jobs for your Google Cloud Printers:
  • See active print jobs
  • Delete active print jobs
  • See completed print jobs
  • Delete completed print jobs
If you are an end user, the Google Cloud Print Interface is where you want to be. You won't get far with my Google Cloud Printer PHP tutorials.

Also, if you want to uninstall Google Cloud Print from your machine, you will have to do that under your Control Panel on your Windows workstation.

See The Full Blog Post


Google Cloud Print Job Notifications Using XMPP

I made it over the last hurdle building my Google Cloud Print Proxy in PHP.

I have a prototype for the Google Cloud Print, XMPP Print Job Notification Service.

Even though you can pull print jobs via the GCP /fetch interface, Google requires cloud print proxies to receive print job notifications via an XMPP push, rather than constantly polling the /fetch interface.

It makes sense. Using XMPP is a great way to receive API notifications, and minimize requests to an API. Companies like Superfeedr are using PubSubHubbub for API notifications.

I am building on top of the XMPPHP library for my Google Cloud Print XMPP Service.

First I authenticate with the Google Talk Service using Client Login and ChromiumSync Service.

Then I set up a persistent XMPP connection with the Google Talk Service on behalf of my Google Cloud Print user, for my Printer Proxy.

When you establish a persistent connection with the Google Talk Service using XMPP, it will return a Full JID and a Bare JID for your persistent XMPP connection.

Once the session begins you send the following subscription stanza:

The service will acknowledge your subscription by returning the following:

Once subscribed, my persistent XMPP connection will handle and process any messages from cloudprint.google.com notifying me of new cloud print jobs for users.

I still have a lot of work to do to make the Google Cloud Print Proxy handle jobs flawlessly.

I need to set up the XMPP print job notifications to handle print jobs for all users that have a print proxy registered.

The XMPP print job notification service has to be running 24 / 7, initiating the print job /fetch via the API, and the commercial print process, for each job.

Once I get this cleaned up a little more I will publish a PHP class for working with the Google Cloud Print Services Interface and XMPP Print Job Notifications to Github.

See The Full Blog Post


File Formats for Google Docs API

I am defining various aspects of the Google Docs platform in conjunction with my Google Cloud Print Proxy work.

I need to know what is possible via the Google Docs Listing API for managing documents prior to cloud printing.

I am focusing on what file types and sizes are viable through the Google Docs Web Interface, Google Docs Viewer and the Google Docs Listing API.

The following formats are available for use in the Google Docs Viewer:
  • Microsoft Word (.DOC, .DOCX)
  • Microsoft Excel (.XLS, .XLSX)
  • Microsoft PowerPoint 2007 / 2010 (.PPT, .PPTX)
  • Apple Pages (.PAGES)
  • Adobe Illustrator (.AI)
  • Adobe Photoshop (.PSD)
  • Autodesk AutoCad (.DXF)
  • Scalable Vector Graphics (.SVG)
  • PostScript (.EPS, .PS)
  • TrueType (.TTF)
  • XML Paper Specification (.XPS)
With Google Docs Business Accounts, any document type can be uploaded via the Google Docs Web Interface, and any document can be created or uploaded via the Google Docs List API.

There are specific document types that can be uploaded as Google Documents.

Google Documents

You can import in the following file types for Google Docs:
  • .gif - image/gif - Graphics Interchange Format
  • .odt - application/vnd.oasis.opendocument.text - Open Document Format
  • .doc - application/msword - Microsoft Word Format
  • .jpeg - image/jpeg - Joint Photographic Experts Group Image Format
  • .html - text/html - HTML Format
  • .txt - text/plain - TXT File
  • .pdf - application/pdf - Portable Document Format
  • .png - image/png - Portable Networks Graphic Image Format
  • .rtf - application/rtf - Rich Text Format
Google Docs have a maximum import file size of 512000 KB or 512 MB.

You can export in the following file types for Google Docs:
  • .doc - application/msword - Microsoft Word Format
  • .html - text/html - HTML Format
  • .jpeg - image/jpeg - Joint Photographic Experts Group Image Format
  • .odt - application/vnd.oasis.opendocument.text - Open Document Format
  • .pdf - application/pdf - Portable Document Format
  • .png - image/png - Portable Networks Graphic Image Format
  • .rtf - application/rtf - Rich Text Format
  • .svg - image/svg+xml - Scalable Vector Graphics Image Format
  • .txt - text/plain - TXT File
  • .zip - application/zip - ZIP archive, containing any images used in the document
Google Drawings

You can import in the following file types for Google Drawings:
  • .wmf - image/x-wmf - Windows Metafile
Google Drawings have a maximum import file size of 512000 KB or 512 MB.

You can export in the following file types for Google Drawings:
  • .jpeg - image/jpeg - Joint Photographic Experts Group Image Format
  • .pdf - application/pdf - Portable Document Format
  • .png - image/png - Portable Networks Graphic Image Format
  • .svg - image/svg+xml - Scalable Vector Graphics Image Format
Google Presentations

You can import in the following file types for Google Presentations:
  • .ppt - application/vnd.ms-powerpoint - PowerPoint Format
  • .pps - application/vnd.ms-powerpoint - PowerPoint Format
Google Presentations have a maximum import file size of 1048576 KB or 1049 MB.

You can export in the following file types for Google Presentations:
  • .pdf - application/pdf - Portable Document Format
  • .png - image/png - Portable Networks Graphic Image Format
  • .ppt - application/vnd.ms-powerpoint - PowerPoint Format
  • .swf - application/x-shockwave-flash - Flash Format
  • .txt - text/plain - TXT File
Google Spreadsheets

You can import in the following file types for Google Spreadsheets:
  • .ods - application/vnd.oasis.opendocument.spreadsheet - ODS (Open Document Spreadsheet)
  • .xls - application/vnd.ms-excel - XLS (Microsoft Excel)
  • .txt - text/plain - TXT File
  • .tsv - text/tab-separated-values - TSV (Tab Separated Value)
Google Spreadsheets have a maximum import file size of 10486760 KB or 10486 MB.

You can export in the following file types for Google Spreadsheets:
  • .xls - application/vnd.ms-excel - XLS (Microsoft Excel)
  • .csv - text/csv - CSV (Comma Separated Value)
  • .png - image/png - Portable Networks Graphic Image Format
  • .ods - application/vnd.oasis.opendocument.spreadsheet - ODS (Open Document Spreadsheet)
  • .tsv - text/tab-separated-values - TSV (Tab Separated Value)
  • .html - text/html - HTML Format
All other file types uploaded to Google Docs for storage, as long as they are not converted into a Google Document, can be a maximum of 1 GB. PDF documents uploaded to Google Docs can only be exported as PDF documents.

With the ability to upload ANY file type up to 1 GB in size, Google Docs makes a very viable cloud storage platform.

The number of document formats Google supports for uploading and exporting via the Google Docs Listing API also makes it an extremely viable document conversion platform.
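To keep all of these formats straight in my own code, a simple lookup built from the export lists above can tell the proxy whether a requested conversion is even possible -- the array is just those lists restated, and the helper name is mine:

    <?php
    // Export formats supported per Google Docs document type, as listed above.
    $gcp_export_formats = array(
        'document'     => array('doc', 'html', 'jpeg', 'odt', 'pdf', 'png', 'rtf', 'svg', 'txt', 'zip'),
        'drawing'      => array('jpeg', 'pdf', 'png', 'svg'),
        'presentation' => array('pdf', 'png', 'ppt', 'swf', 'txt'),
        'spreadsheet'  => array('xls', 'csv', 'png', 'ods', 'tsv', 'html'),
    );

    // Can we convert a given Google Docs type into the requested format?
    function gcp_can_export($type, $format, $formats)
    {
        return isset($formats[$type]) && in_array(strtolower($format), $formats[$type], true);
    }

    // Example: can a Google Spreadsheet be exported as CSV? (true)
    var_dump(gcp_can_export('spreadsheet', 'csv', $gcp_export_formats));
    ?>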

Also Google provides advanced document management tools such as OCR and language translation on upload, along with resumable uploads.

Throw in mobile viewing, editing, and cloud printing of documents, and you have the building blocks for a pretty amazing cloud document management system.

See The Full Blog Post


Google Cloud Print Proxy (Cloud Printer)

I have some working code for a Google Cloud Print Proxy. It is currently written in PHP and uses the Zend framework.

I have written specific blog posts for each service endpoint, and to finish up I wanted to do a complete walk-through.

First I authenticate against a user's Google Account with the Google ClientLogin API.

Then, using the Google Cloud Print Services Interface, I can make calls to endpoints that manage printers, and to endpoints that manage cloud print jobs, and I can receive print job notifications via a persistent XMPP connection. If you want to download the sample code for my Google Cloud Print Proxy work, it is available in a few different formats. If you have any thoughts or ideas for an innovative Cloud Print Proxy, let me know.

 

UPDATE 2/28/2011 - I have finished the first prototype for the XMPP print job notification service. This is a critical piece to eliminate constant polling of the /fetch service.

See The Full Blog Post


Google Cloud Print - Control

The whole point of deploying a Google Cloud Print Proxy is to be able to manage print jobs.

After authenticating using the Google ClientLogin API, you can then fetch an existing Google Cloud Print job from the user's queue.

Each print job has an ID that you can use to reference the print job and control its status with the /control service.

The Google Cloud Print /control endpoint accepts the following parameters (a request sketch follows the list):

  • jobid - Unique job identification (generated by server).
  • status - Status of the job, which can be one of the following:
    • QUEUED: Job just added and has not yet been downloaded.
    • SPOOLED: Job downloaded and has been added to the client-side native printer queue.
    • DONE: Job printed successfully.
    • ERROR: Job cannot be printed due to an error.
  • code - Error code string or integer (as returned by the printer or OS) if the status is ERROR.
  • message - Error message string (as returned by the printer or OS) if the status is ERROR
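Here is that request sketch, in PHP with cURL -- the https://www.google.com/cloudprint base URL is an assumption on my part, while the parameters are the ones listed above:

    <?php
    // Tell Google Cloud Print that a job has moved to a new status.
    function gcp_control_job($auth_token, $job_id, $status, $code = '', $message = '', $oem_id = 'my-proxy')
    {
        $ch = curl_init('https://www.google.com/cloudprint/control');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
            'jobid'   => $job_id,
            'status'  => $status,   // QUEUED, SPOOLED, DONE, or ERROR
            'code'    => $code,     // only meaningful when status is ERROR
            'message' => $message,  // only meaningful when status is ERROR
        )));
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Authorization: GoogleLogin auth=' . $auth_token,
            'X-CloudPrint-Proxy: ' . $oem_id,
        ));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        curl_close($ch);

        return json_decode($response, true);
    }
    ?>

In practice the proxy would call something like gcp_control_job($token, $jobid, 'DONE') once the commercial print run for that job has finished.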

Here is an example JSON response:
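Something along these lines -- beyond a basic success flag and message, the exact fields are my assumption:

    {
      "success": true,
      "message": "Print job status updated."
    }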

The /control interface can be used by the proxy to update Google Cloud Print about the status of the print job for a user's cloud printer.

This interface is not used for any control, disabling, or filtering of the print job or the printer.

See The Full Blog Post


Google Cloud Print Proxy - Summary

OK, that concludes my Google Cloud Print Proxy work for the week.

I've tried to summarize the work I've done in a series of blog posts and code samples.

Here is a summary of what I've done to date with the Google Cloud Print API. With this, you should be able to create a Google Cloud Print proxy in PHP. Each post has supporting code samples in the form of GitHub Gists.

I will publish the entire set of code samples plus a PHP class when done.

I will also publish samples for managing Google Cloud Print jobs using the Google Cloud Print services interface.

See The Full Blog Post


Introduction to the Google Cloud Print Services Interface

The Google Cloud Print services interface or Google Cloud Print API is where the whole cloud print thing starts getting cool.

The Google Cloud Print services interface allows you to create a cloud print proxy that gives you a virtual cloud printer you can send jobs to. I'm developing a PHP / MySQL proxy that enables me to register a virtual cloud printer with the Google Cloud Print (GCP) registry. Once the printer is registered with the service, it can receive jobs from, and communicate status with, Google Cloud Print.

Google defines a cloud print proxy as a piece of software that runs on a computer connected to a non-cloud-aware printer, a small add-on hardware device that contains the proxy interface and connects to the printer, or firmware that is built into printers of the future.

I want to evolve the proxy definition beyond just hardware: I want to proxy Google Cloud Print jobs and translate them into anything. But first, what is the GCP services interface?

The base URL for the GCP services interface is listed in the Google Cloud Print Services Interface Guide over at Google Code. Currently the GCP services interface uses Google ClientLogin for installed apps, although I don't see any reason it can't use OAuth for Web Apps. Both types of authentication give you access to a user's Google Account services; OAuth for Web Apps is cleaner, and doesn't require you to ask for a login.

The Auth token retrieved from authentication will need to be included in all requests using the HTTP header Authorization: GoogleLogin auth={auth_token}.

In addition, all requests to the Google Cloud Print server will need to include the HTTP header: X-CloudPrint-Proxy: {OEM_ID}
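In PHP with cURL, those two headers end up on every request, so a minimal helper along these lines keeps them in one place -- the base URL I am using is an assumption on my part, and the OEM ID is just a placeholder:

    <?php
    // Generic GET against the GCP services interface, with both required headers.
    function gcp_get($endpoint, $auth_token, $params = array(), $oem_id = 'my-proxy-oem-id')
    {
        $url = 'https://www.google.com/cloudprint/' . $endpoint;
        if ($params) {
            $url .= '?' . http_build_query($params);
        }
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Authorization: GoogleLogin auth=' . $auth_token,
            'X-CloudPrint-Proxy: ' . $oem_id,
        ));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        curl_close($ch);

        return json_decode($response, true);
    }
    ?>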

Once you are authenticated with a user's account, you can begin to make calls against the GCP services interface. Google provides the following cloud print endpoints:
  • /control - Allows proxy control over the status of a cloud print job.
  • /delete - Allows proxy to delete a printer from Google Cloud Print (GCP) registry.
  • /fetch - Allows proxy to fetch the next available job for the specified cloud printer.
  • /list - Provides the proxy a listing of all the printers for the given user's Google Account.
  • /register - Allows proxy to register printers.
  • /update - Allows proxy to update various attributes and parameters of the printer registered with Google Cloud Print.
The GCP services interface gives you access to everything you need for managing Google Cloud printers and jobs. Google Cloud Print can also provide print job availability notifications through Google Talk, using a persistent XMPP connection. I have a working XMPP script, but because my focus goes beyond instant gratification from a local printer, into more of a commercial cloud print format, I'm not implementing the XMPP job management, and am just relying on the /fetch and /control endpoints to manage jobs.

That concludes a quick overview of the GCP services interface; I will be publishing specific Google Cloud Print code samples for each step shortly. You can also find a Google Cloud Print Developers Guide and Google Cloud Print Services Interface Guide over at Google Code.

See The Full Blog Post


Introduction to Google Cloud Print

I have been studying Google Cloud Print since they announced it last year. Even though you don't catch me printing very often, I am fascinated by the technology and its potential. Google Cloud Print enables any application (web, desktop, mobile) on any device to print to any printer. Applications submit print jobs to the cloud print service via the API offered by Google.

Google Cloud Print then sends the print job to the selected printer, which the user has previously registered with the service. New printers, called "cloud-aware" printers, connect directly to the cloud print service, while legacy printers will use a Google Cloud Print Proxy.

Currently Google provides a proxy that works on Windows with Google Chrome. They will be supporting Mac and Linux versions in the near future.

All of this is fascinating, and I see the potential for changing how users interact with their home and office printers, extending these printers' reach onto the mobile web.

However, I'm interested in going further with this. I'm spending time working with the Google Cloud Print Services API and building a print proxy that can be used in many different situations, beyond interfacing with physical printers.

See The Full Blog Post


Google Cloud Print - Register

See The Full Blog Post