Overview

This is a personal note on hosting Hugging Face models on AWS Lambda for serverless inference, based on the following article.

https://aws.amazon.com/jp/blogs/compute/hosting-hugging-face-models-on-aws-lambda/

Additionally, I cover providing an API using Lambda function URLs and CloudFront.

Hosting Hugging Face Models on AWS Lambda

Preparation

For this section, I referred to the document introduced at the beginning.

https://aws.amazon.com/jp/blogs/compute/hosting-hugging-face-models-on-aws-lambda/

First, run the following commands. I created a virtual environment called venv, but this is not strictly required.

# Clone the project
git clone https://github.com/aws-samples/zero-administration-inference-with-aws-lambda-for-hugging-face.git
cd zero-administration-inference-with-aws-lambda-for-hugging-face

# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install the project's dependencies
pip install -r requirements.txt

# Bootstrap the CDK. This provisions the initial resources needed by the CDK to perform deployments.
cdk bootstrap

Note

The documentation says to run cdk deploy next. Doing so, I was able to deploy and run Lambda tests successfully. However, when I later issued and called a Lambda function URL (described below), several errors occurred, so the following modifications are needed.

inference/*.py

When the function is invoked via a function URL, request parameters arrive in queryStringParameters, so handling for them must be added. If you use POST requests, further modifications are required.

Below is an example of changes to sentiment.py. By changing the pipeline arguments, it is also possible to perform inference based on your own custom model.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
import json
from transformers import pipeline

nlp = pipeline('sentiment-analysis')


def handler(event, context):
    # Added: when invoked via a function URL, the parameter arrives in queryStringParameters
    if 'queryStringParameters' in event:
        text = event['queryStringParameters']['text']
    else:
        text = event['text']
    response = {
        "statusCode": 200,
        "body": nlp(text)[0]
    }
    return response
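As for the POST case mentioned above, here is a minimal sketch of how the parameter extraction could be generalized. The helper name extract_text is my own, and for brevity it assumes the POST payload is a plain (not base64-encoded) JSON string in event['body']:

```python
import json

def extract_text(event):
    """Pick the 'text' parameter out of a Lambda event, whether it comes
    from a function-URL GET, a function-URL POST, or a direct invocation."""
    # Function URL GET: ?text=... lands in queryStringParameters
    params = event.get('queryStringParameters') or {}
    if 'text' in params:
        return params['text']
    # Function URL POST: the JSON payload arrives as a string in event['body']
    if event.get('body'):
        return json.loads(event['body']).get('text')
    # Direct invocation (e.g. a Lambda console test event)
    return event.get('text')

print(extract_text({'queryStringParameters': {'text': 'i am happy'}}))  # i am happy
print(extract_text({'body': '{"text": "i am sad"}'}))                   # i am sad
print(extract_text({'text': 'hello'}))                                  # hello
```

The handler above would then call nlp(extract_text(event)) regardless of how the function was invoked.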

inference/Dockerfile

Next, add a character encoding specification to the Dockerfile.

ARG FUNCTION_DIR="/function/"

FROM public.ecr.aws/lambda/python:3.8

# Added: character encoding specification
ENV PYTHONIOENCODING=utf-8

# Install the function's dependencies
COPY requirements.txt ${FUNCTION_DIR}
RUN python3 -m pip install -r requirements.txt

# Copy function code
COPY *.py ${FUNCTION_DIR}

WORKDIR ${FUNCTION_DIR}

# The handler (CMD) is set for each function by the CDK script

Deploy

After that, run the following to deploy.

cdk deploy

Lambda Function URL

The following article is helpful as a reference.

https://dev.classmethod.jp/articles/integrate-aws-lambda-with-cloudfront/

Create a function URL from the Lambda console. This time, I simply specified “NONE” for the “Auth type” and enabled “Configure cross-origin resource sharing (CORS)”.

Additionally, I selected “*” (all) for “Allow methods”.
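The same configuration can also be done from the AWS CLI instead of the console; below is a sketch under the assumption that the AWS CLI v2 is configured, with my-fn as a placeholder function name:

```shell
# Create a public function URL with NONE auth and CORS enabled for all methods
aws lambda create-function-url-config \
  --function-name my-fn \
  --auth-type NONE \
  --cors '{"AllowOrigins": ["*"], "AllowMethods": ["*"]}'

# Grant public (unauthenticated) permission to invoke the function URL
aws lambda add-permission \
  --function-name my-fn \
  --statement-id FunctionURLAllowPublicAccess \
  --action lambda:InvokeFunctionUrl \
  --principal '*' \
  --function-url-auth-type NONE
```

The add-permission step matters: a NONE-auth URL still returns 403 unless the function's resource policy allows lambda:InvokeFunctionUrl (the console adds this automatically).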

As a result, you can execute inference from a URL like the following.

https://XXX.lambda-url.us-east-1.on.aws/?text=i am happy

The following result is obtained.

{"statusCode": 200, "body": {"label": "POSITIVE", "score": 0.9998801946640015}}
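Strictly speaking, the space in text=i am happy should be percent-encoded. A sketch of building and calling the URL from Python with only the standard library (the XXX endpoint is the placeholder from above):

```python
from urllib.parse import urlencode

# Placeholder function-URL endpoint
base = "https://XXX.lambda-url.us-east-1.on.aws/"

# urlencode percent-encodes the query string ("i am happy" -> "i+am+happy")
query = urlencode({"text": "i am happy"})
url = f"{base}?{query}"
print(url)  # https://XXX.lambda-url.us-east-1.on.aws/?text=i+am+happy

# To actually send the request:
# from urllib.request import urlopen
# print(urlopen(url).read().decode("utf-8"))
```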

CloudFront

For this configuration as well, the following article is helpful as a reference.

https://dev.classmethod.jp/articles/integrate-aws-lambda-with-cloudfront/

The important point is to set Query strings to All in the Origin request settings. Without this, the same result will be returned even when the URL query string is changed.

As a result, you can execute inference from a URL like the following. It is also possible to assign a custom domain.

https://yyy.cloudfront.net/?text=i am happy

Summary

I introduced how to host Hugging Face models on AWS Lambda for serverless inference. There may be some inaccuracies, but I hope this serves as a useful reference.