AWS has great documentation specifying the features supported by both REST and HTTP APIs. This post is mostly my high-level thoughts on the implementation of APIs under each of these modes, and the common use cases for each. I've also tried to condense the information and recommendations available in the links from the AWS documentation. There are more differences than those listed here; these are just the ones that have been important to me when choosing a type of API.
The basic TL;DR is that REST APIs support more features than HTTP APIs; however, they come at a higher price, both financially and in complexity.
When we're building an API, one of our first objectives is to secure it, even if the API is internally facing, applying defence in depth and the principle of least privilege. For this, we'll break the use cases up between internally facing and externally facing APIs.
Externally Facing APIs
In my experience, the most common use case for API Gateway is creating externally facing APIs for your clients to consume. There are two simple integrations available for authorising users to your APIs: Cognito and JWT authorisers. If you're using AWS Cognito as your identity provider, the authorisation on both platforms is similar in functionality but different in execution: REST APIs have a native Cognito authoriser, whereas with HTTP APIs you must use the generic JWT authoriser and set up the Cognito User Pool as the issuer. If your authentication case is simple (i.e. a single issuer and scope-based AuthZ) and you're using a third-party authentication provider, then HTTP APIs support a zero-cost authorisation method through the same generic JWT authoriser. REST APIs do not have this integration; to validate third-party JWTs you must use a Lambda authoriser. This brings us to the last authorisation method: Lambda authorisers are supported on both HTTP and REST APIs, and authenticate the request using any custom logic you can write in AWS Lambda. This gives you the greatest flexibility over your authorisation process, but is also the most expensive method, as you will be billed for Lambda execution time and invocations in addition to API Gateway charges. Lambda authorisers come in two types: token-based, which considers a single field from the request as the authorisation context, and request-based, which allows you to make a decision based on the entire request.
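To make the token-based flavour concrete, here is a minimal sketch of a token-based Lambda authoriser handler. The token check is a hypothetical stand-in; a real authoriser would validate a JWT's signature and claims against the issuer's public keys rather than comparing against a hard-coded value.

```python
# Sketch of a token-based Lambda authoriser for a REST API.
# NOTE: the "allow-me" comparison is a placeholder for real token validation.

def handler(event, context):
    token = event.get("authorizationToken", "")
    # Stand-in validation: accept a single hard-coded token.
    effect = "Allow" if token == "allow-me" else "Deny"
    # API Gateway expects an IAM policy document back from the authoriser.
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```

A request-based authoriser looks much the same, except the event carries the full request (headers, query strings, source IP) instead of a single token field.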
Internally Facing APIs
For internally facing APIs I typically like to use IAM policies to decide who can invoke my API. Both REST and HTTP APIs can grant invoke access through an identity-based IAM policy attached to a role, group, or user. However, REST APIs also let you attach resource policies directly to the API itself. This allows for advanced configuration and is useful where an API Gateway is called by a wide range of principals and you want fine-grained access control on the routes. For example, you might want to restrict access to various parts of your API to specific principals, CIDR ranges, or VPCs, or to require MFA. As a general rule of thumb, if you're setting up single-account, internal APIs that will be accessed over the internet with IAM, then HTTP APIs are likely to support your needs. As your cloud estate grows and multiple teams and accounts consume these APIs, you may find you need the additional configuration available through resource policies in REST APIs. For internally facing APIs, the method of distribution also changes your access patterns, so check the distribution section of this article to make sure you can consume your endpoint in your preferred way.
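As an illustration of the fine-grained control a REST API resource policy gives you, here is a hypothetical policy allowing a partner account's role to invoke one route, but only from a known CIDR range. The account IDs, role name, API ID, and CIDR are all placeholders.

```python
import json

# Hypothetical REST API resource policy: allow a specific cross-account
# principal to call GET /status, restricted to a known source CIDR range.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/PartnerRole"},
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:eu-west-1:999988887777:abc123/*/GET/status",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

print(json.dumps(resource_policy, indent=2))
```

An identity-based IAM policy can only say "this principal may invoke these ARNs"; the resource policy above is where per-route, per-CIDR, or per-VPC conditions live, which is why it matters for multi-account estates.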
Mutual TLS
This doesn't typically come up, but in some cases it is required. Both HTTP and REST APIs support terminating client mTLS at the gateway. However, only REST APIs allow you to use certificates for downstream mTLS between API Gateway and your integrated services. Therefore, if you are using Public Key Infrastructure (PKI) internally, a REST API may be required to ensure compliance with internal authorisation methods.
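For the client-side half that both API types support, enabling mTLS is a custom-domain setting pointing at a truststore of client CA certificates in S3. The sketch below shows the shape of that configuration as it might be passed to API Gateway's CreateDomainName; the domain, certificate ARN, and bucket are placeholders.

```python
# Hypothetical custom-domain configuration terminating client mTLS at the
# gateway. The truststore is a PEM bundle of CA certificates that client
# certificates must chain to.
custom_domain = {
    "domainName": "api.example.com",
    "regionalCertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/abc-123",
    "endpointConfiguration": {"types": ["REGIONAL"]},
    "mutualTlsAuthentication": {
        "truststoreUri": "s3://example-truststore-bucket/truststore.pem",
    },
}
```

The downstream leg (gateway to backend) is the REST-only part, configured separately as a client certificate that API Gateway presents to your integration.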
Distribution
This is where we start to see some large differences between the API options. Both API types support deployment within a specific AWS Region (a Regional endpoint), but REST APIs also support Edge-optimised and Private endpoints.
Regional API Endpoints
This is the most basic type of externally addressable API. Supported by both REST and HTTP APIs, it covers the simple case where access is scoped to a single region, and it is useful where the API is consumed by in-region resources or by clients that are not geographically distributed. An API deployed this way has a few caveats: it will not use CloudFront Points of Presence to route traffic from the consumer to your API; all traffic will instead travel over the public internet. It will also always be publicly addressable, and it is your responsibility to authorise incoming requests. If you deploy your API to multiple regions, you can use latency-based routing with Route 53 to direct users to the nearest region.
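The multi-region setup boils down to one Route 53 record per region sharing a name, distinguished by a set identifier. A sketch of what those latency-based records might look like, with the hosted zone name and execute-api hostnames as placeholders:

```python
# Hypothetical Route 53 latency-based records for an API deployed to two
# regions. Route 53 answers DNS queries with the record whose Region has the
# lowest measured latency for the caller.
latency_records = [
    {
        "Name": "api.example.com.",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": "eu-west-1",
        "Region": "eu-west-1",
        "ResourceRecords": [{"Value": "abc123.execute-api.eu-west-1.amazonaws.com"}],
    },
    {
        "Name": "api.example.com.",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": "us-east-1",
        "Region": "us-east-1",
        "ResourceRecords": [{"Value": "def456.execute-api.us-east-1.amazonaws.com"}],
    },
]
```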
Edge Optimised API Endpoints
Edge-optimised endpoints are only available on REST APIs. The basic premise is that AWS places a managed CloudFront distribution in front of your API to accelerate access to an API hosted out of a single AWS region: instead of traversing the public internet, requests travel from the user's closest CloudFront PoP to your API. A similar setup can be achieved by putting a regional HTTP API behind a customer-managed CloudFront distribution; however, your API will still be directly addressable, and your complexity will increase because you are managing more resources directly.
Private API Endpoints
REST APIs can also be deployed privately. This means they are not publicly addressable at all, and instead must be accessed through an interface VPC endpoint. This ensures that your private API is accessible only by the services that you choose, shielding it from public attacks far more effectively than regional or edge-optimised endpoints. If your APIs will only ever be consumed from inside your VPCs, this option gives REST APIs a significant security advantage over HTTP APIs.
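A private REST API still needs a resource policy stating which VPC endpoints may reach it. A common pattern, sketched below with a placeholder endpoint ID, is a broad Allow paired with an explicit Deny for any request not arriving through the approved endpoint:

```python
# Hypothetical resource policy for a private REST API: permit invocation in
# general, then deny anything that did not arrive via the named interface
# VPC endpoint. The vpce- ID is a placeholder.
private_api_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        },
    ],
}
```

Because an explicit Deny always wins in IAM evaluation, the second statement guarantees that even a principal matching the Allow cannot reach the API from outside the approved endpoint.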
Aside from the options we've already discussed and the varying levels of protection they offer, two more methods we can use to protect APIs are Web Application Firewalls (WAFs) and throttling. These are another area where the implementations differ greatly, and so may influence your decision.
Web Application Firewalls
REST APIs can be associated with AWS WAF regional Web Access Control Lists (ACLs). This allows traffic matching common patterns of known web exploits, such as SQL injection and cross-site scripting, to be filtered before it reaches your API stage. You can also use the WAF to create custom rules around known bad IPs, scrapers, and specific request properties. Another core WAF feature is rate-based rules, which let you throttle requests by IP. WAF association is not supported by HTTP APIs.
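As a sense of what a rate-based rule involves, here is a sketch of one as it might appear in a WAFv2 CreateWebACL request, blocking any single IP that exceeds a per-five-minute request limit. The rule name and limit are illustrative.

```python
# Hypothetical AWS WAFv2 rate-based rule: block any source IP exceeding
# 2000 requests within the rolling five-minute evaluation window.
rate_based_rule = {
    "Name": "throttle-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "throttle-per-ip",
    },
}
```

Because the rule aggregates by source IP, it caps each client individually rather than imposing one global ceiling, which is exactly the per-client control HTTP APIs lack.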
Throttling
Whilst WAFs can be used to create complex throttling rules for REST APIs, both REST and HTTP APIs have some built-in throttling capability. HTTP APIs can throttle requests at the account level and the route level; however, these rates are global across all clients of the API, and there's no way to scope them to specific users. REST APIs, on the other hand, still allow account-level throttling, but also let you distribute API keys to the users of your API. These keys can then be attached to per-consumer usage plans, applying throttling and quotas at the user level. It is important to note that AWS does not recommend using API keys as your sole source of authorisation; instead, you should enforce finer-grained permissions with a downstream authoriser.
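A usage plan is a small amount of configuration: a steady-state rate, a burst allowance, and a quota, with API keys attached to the plan. The sketch below shows the shape of those settings as they might be passed to API Gateway's CreateUsagePlan; the plan name and numbers are illustrative.

```python
# Hypothetical REST API usage plan: every API key attached to this plan is
# throttled to 10 req/s (bursting to 20) and capped at 100k requests/month.
usage_plan = {
    "name": "bronze-tier",
    "throttle": {"rateLimit": 10.0, "burstLimit": 20},
    "quota": {"limit": 100000, "period": "MONTH"},
}
```

This per-key model is the piece HTTP APIs are missing: their stage- and route-level limits apply to all callers in aggregate, so one noisy consumer can exhaust the budget for everyone.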
Integrations
Both types of API integrate with Lambda, AWS service actions, public HTTP endpoints, and private integrations with Network Load Balancers (NLBs). HTTP APIs also support private integrations with Application Load Balancers (ALBs), whilst REST APIs support mock integrations. Typically I find that I can get away with the layer 4 routing provided by NLBs, as the extra layer 7 functionality provided by ALBs is normally also found in API GW. I've found the mock integration especially useful for testing API GW properties in isolation (e.g. auth).
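For the testing use case, a mock integration is just a pair of mappings: a request template that tells API Gateway which canned status to select, and a response template with the body to return, no backend involved. A sketch of those two pieces, with placeholder content:

```python
# Hypothetical mock integration for a REST API method. API Gateway evaluates
# the request template to pick a statusCode, then renders the matching
# integration response, never calling a backend.
mock_integration = {
    "type": "MOCK",
    "requestTemplates": {"application/json": '{"statusCode": 200}'},
}

mock_integration_response = {
    "statusCode": "200",
    "responseTemplates": {"application/json": '{"message": "mock ok"}'},
}
```

Pointing a route with a real authoriser at a mock like this lets you exercise the auth path end to end without standing up any compute behind the gateway.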
Both REST and HTTP APIs support logging to CloudWatch. For more complex use cases, where you might have a security lake or use services like X-Ray to trace individual requests, REST APIs are the clear winner, with the ability to stream access logs into Kinesis Data Firehose and native X-Ray support.
Generally, I find that REST APIs are worth the extra cost and complexity to set up. Firstly, because despite being more than twice the price, API requests are still ridiculously cheap. Secondly, the ability to add security features like WAF and API-key throttling is likely not going to matter in the short run, but the horror stories of monster AWS bills make it worth it. Finally, whenever I've built an HTTP API, I've eventually needed a REST API feature anyway, and starting with a REST API would have saved the rework.
I would, however, recommend HTTP APIs in the scenario where you'll have a publicly accessible API in a single geographic region, with an access pattern that is unlikely to evolve, that doesn't require any REST-API-specific features, and that uses the free basic JWT or IAM authorisers. For anything else, I would caution you to think about the consequences of someone launching a large number of requests against your API, and about the latency for geographically distributed users.