There are two available “solutions” here, but I’m not sure that either one is great. I’ll need to think about this some more. I’m intrigued by the idea of either making all job code available within a project space at runtime, or (maybe even cooler) allowing users to specify extensions at the project level (like organization-specific language-package additions) which get injected into each VM at runtime along with the expression, the language-package, and state. This would both solve the issue of wanting to “import” shared job code and allow for project-level one-to-one mapping tables (sometimes called “dictionaries”).
For now: one way would be to collapse multiple jobs into one, define some custom functions up at the top, and then use them throughout, controlling which operations get executed based on the data. This “monolith”-style job may be harder to maintain, though.
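As a rough sketch of that “monolith” pattern, here’s what defining helpers at the top of a single job expression and branching on the data might look like. All the names here (`routeRecord`, `handleContact`, `handleLead`, the record shapes) are made up for illustration, not part of any real adaptor:

```javascript
// Hypothetical "monolith" job: helper functions defined once at the top,
// then reused for every record. Which operation runs is decided by the data.
function handleContact(record) {
  // In a real job this would return an upsert operation for Salesforce.
  return { action: 'upsert', sobject: 'Contact', data: record };
}

function handleLead(record) {
  // ...and this would return a create operation.
  return { action: 'create', sobject: 'Lead', data: record };
}

function routeRecord(record) {
  // Data-driven branching: the record itself decides which helper applies.
  return record.type === 'contact' ? handleContact(record) : handleLead(record);
}

// In a real job you'd map this over state.data; here we just demonstrate:
const ops = [
  { type: 'contact', name: 'Ada' },
  { type: 'lead', name: 'Grace' },
].map(routeRecord);

console.log(ops.map(o => `${o.action}:${o.sobject}`).join(','));
// → upsert:Contact,create:Lead
```

The upside is that the branching logic lives in one place; the downside, as noted, is that one big expression accretes everything and gets harder to maintain.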
The other way would apply if the “shared code” were generic enough to qualify as a language-package addition. All the language-packages are open source, so if you find yourself doing the same thing over and over (searching for a record in SF, doing X if found and Y if not found, for example) you could write that code as a new named function in language-salesforce. Then, instead of duplicating that code across all the jobs, you could call searchAndOperate(uuid_field, uuid_value, codeIfFound, codeIfNotFound) as an operation (just like “create” or “upsert”) in language-salesforce.
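To make that concrete, here’s a minimal sketch of what a `searchAndOperate` operation could look like. The signature follows the text above, but everything inside is an assumption: a real language-salesforce implementation would query Salesforce via the connection on state, whereas this toy version just searches `state.data`:

```javascript
// Hypothetical operation for a language-package. Like other operations,
// it returns a function of state, so it composes in a job expression.
function searchAndOperate(uuidField, uuidValue, codeIfFound, codeIfNotFound) {
  return state => {
    // ASSUMPTION: a real adaptor would run a Salesforce query here;
    // this sketch just looks for a matching record in state.data.
    const match = (state.data || []).find(r => r[uuidField] === uuidValue);
    // The callbacks are themselves operations (record => state => state).
    return match ? codeIfFound(match)(state) : codeIfNotFound()(state);
  };
}

// Usage sketch: "do X if found, Y if not found".
const onFound = record => state => ({ ...state, result: `update:${record.Id}` });
const onNotFound = () => state => ({ ...state, result: 'create:new' });

const state = { data: [{ Id: '003', uuid__c: 'abc-123' }] };
const after = searchAndOperate('uuid__c', 'abc-123', onFound, onNotFound)(state);
console.log(after.result);
// → update:003
```

Modeling the “if found / if not found” branches as operations themselves keeps the new function consistent with how the rest of a job expression composes.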
I’m terrible at naming things, but I hope you get the idea!