Mobile selective build

08/06/2021 16 min Episode 25

Episode Synopsis

What is mobile selective build? Why are we so obsessed with reducing binary size? How does selective build work? Why doesn't static linking just work? Why can't you just read out the ops used in a TorchScript model to determine what operators you actually need? What are the tradeoffs of statically determining the operator dependency graph versus tracing? What's up with the SELECTIVE_NAME macro? How the heck does selective build work at all when you have multiple mobile apps in a single Buck build system? What takeaways should I have as a regular PyTorch developer?

Further reading

- Official open source mobile documentation on custom selective builds: https://pytorch.org/mobile/android/#custom-build
- How to rebuild the op dependency yaml: https://github.com/pytorch/pytorch/blob/master/tools/code_analyzer/build.sh

Liner notes

- binary size is at a premium; ship only what you actually need
- big idea: get the ops your model needs -> apply this to the build of pytorch
- get the ops your model needs
  - TorchScript ~> read it out directly from the model itself
  - but what if ops use other ops? you need a dependency graph. done with LLVM static analysis (jiakai) ~> with a (possibly inaccurate) yaml checked in for an easy kickstart if you don't want to run the pass (updated by bot, not operational since Feb; recommend rebuilding from scratch if you run into trouble)
  - other possibility: dynamic tracing
    - pro: no need for a dependency graph, just look at what was called; also works for dtypes
    - con: you need representative inputs; if there's control flow, the trace might not cover everything
- apply this to the build of pytorch
  - ordinarily: static linking ensures stuff that isn't used gets pruned
  - but this doesn't work with distributed operator registration based on static initializers
  - how to be selective, then?
    - codegen: just don't generate it
    - no codegen: SELECTIVE_NAME (C++ doesn't support testing strings in macros)
  - build system integration
    - Buck constraint: only one library; therefore, generate multiple copies of the glue library
    - alt: atomize the library into one library per operator.
      caffe2 used to do this; each library takes a long time to build (~1 min), and Xcode crashes because there are too many of them
- common hiccups
  - you modify some implementation detail and an op is/isn't called anymore ~> error! usually this just means some yaml needs regenerating; PyTorch Edge developers are very friendly and can help
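The first liner-note step (read the root ops out of the model, e.g. via `torch.jit.export_opnames` in the linked custom-build docs, then close over "ops that use other ops") is at heart a graph reachability computation. Here is a minimal sketch; the dependency graph and op names are hand-written stand-ins for the yaml the LLVM analysis pass produces, not real data:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical op dependency graph: an edge means "this op may call that op".
// In PyTorch this graph comes from an LLVM static-analysis pass and is
// checked in as a yaml; this stand-in is hand-written for illustration.
using DepGraph = std::map<std::string, std::vector<std::string>>;

// Transitive closure: every op reachable from the model's root ops must be
// kept in the build; everything else can be dropped.
std::set<std::string> selected_ops(const DepGraph& deps,
                                   std::vector<std::string> roots) {
    std::set<std::string> seen;
    while (!roots.empty()) {
        std::string op = roots.back();
        roots.pop_back();
        if (!seen.insert(op).second) continue;  // already visited
        auto it = deps.find(op);
        if (it == deps.end()) continue;         // leaf op, no callees
        for (const auto& callee : it->second) roots.push_back(callee);
    }
    return seen;
}
```

Note that an inaccurate dependency graph fails in one direction only: missing edges mean a needed op gets pruned and you get a runtime error, which is why the liner notes suggest regenerating the yaml when things break.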
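The dynamic-tracing alternative and its control-flow caveat can be sketched in a few lines: each kernel records its own name when it actually runs, so running a representative input yields the op list with no dependency graph at all. The kernels and names below are illustrative, not PyTorch's:

```cpp
#include <cassert>
#include <set>
#include <string>

// Global record of every op that actually executed during tracing.
std::set<std::string>& traced_ops() {
    static std::set<std::string> ops;
    return ops;
}

// Hypothetical kernels that self-report when called.
void add_kernel() { traced_ops().insert("aten::add"); /* ...real work... */ }
void mul_kernel() { traced_ops().insert("aten::mul"); /* ...real work... */ }

// The con from the liner notes: only code paths the input exercises get
// recorded. If no traced input sets use_mul, aten::mul is silently missed.
void model_forward(bool use_mul) {
    add_kernel();
    if (use_mul) mul_kernel();
}
```

This is also why tracing can cover things static analysis cannot, such as which dtypes were actually exercised: it observes real executions rather than over-approximating possible ones.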
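Why doesn't static linking just prune unused ops? The registration pattern below shows the problem; it is a simplified sketch of "distributed registration via static initializers", not PyTorch's actual registration macros:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Global op registry, behind a function-local static for safe init order.
std::map<std::string, std::function<void()>>& registry() {
    static std::map<std::string, std::function<void()>> r;
    return r;
}

// Registration happens as a side effect of constructing a static object.
struct RegisterOp {
    RegisterOp(const std::string& name, std::function<void()> kernel) {
        registry()[name] = std::move(kernel);
    }
};

// Nothing ever *calls* into this translation unit by name: from the
// linker's perspective the object looks unreferenced, yet dropping it
// would silently lose the registration. So these objects must be force-
// kept (e.g. --whole-archive), and unused ops survive into the binary.
static RegisterOp demo_add_registration("demo::add", [] { /* kernel */ });
```

This is the core tension selective build resolves: dead-code elimination keys off references, but registration-by-side-effect has none, so pruning has to happen earlier, at code generation or compile time.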
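On the SELECTIVE_NAME point: the C preprocessor can paste tokens but cannot compare string contents, so a macro alone cannot decide whether `"aten::mul"` is on the allowlist. The escape hatch is a compile-time (constexpr) string check against a generated allowlist, which the compiler can then fold away. A minimal sketch of that idea, with a hand-written allowlist standing in for PyTorch's generated headers:

```cpp
#include <cassert>
#include <string_view>

// Hypothetical allowlist; in a real selective build this would be
// generated from the model's selected-op set, not written by hand.
constexpr std::string_view kAllowlist[] = {"aten::add", "aten::conv2d"};

// constexpr string comparison does what the preprocessor cannot.
constexpr bool op_allowlisted(std::string_view name) {
    for (auto allowed : kAllowlist) {
        if (allowed == name) return true;
    }
    return false;
}

// Registration sites can then be compiled out entirely, e.g.:
//   if constexpr (op_allowlisted("aten::mul")) { /* register kernel */ }
static_assert(op_allowlisted("aten::add"));
static_assert(!op_allowlisted("aten::mul"));
```

Because the check is constexpr, disallowed branches are discarded at compile time, so the unused kernels never make it into the binary in the first place.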