The Lokad app is a webapp provided as SaaS (Software as a Service). The purpose of Lokad is to deliver predictive analytics in order to optimize the supply chain (better stocks, better prices, etc.). The Lokad app is intended as an analytical layer that operates alongside transactional systems (ERP, WMS, CRM, etc.). It comes with a monthly subscription flat fee that typically bundles the app itself with professional services. These professional services, provided by Lokad’s engineers (Supply Chain Scientists), almost entirely eliminate the need for technical support from the IT department for this scope. The one key contribution expected from the IT department is the setup of a data pipeline pushing flat files (by SFTP or FTPS) to Lokad, and, potentially, the reintegration of the results generated.
Last modified: September 21st, 2023
The Lokad app is multitenant. Each tenant (i.e. client account) has its own dedicated file system and its own dedicated codebase repository. The filesystem is accessible through FTPS, SFTP and a web interface. This filesystem is geared toward large flat files (up to 100 GB per file) and features data versioning (like Git). The codebase repository is used to host Envision scripts. Envision is a proprietary DSL (Domain-Specific Language) developed by Lokad. This DSL is heavily specialized for predictive optimization use cases. Envision scripts are used to perform the core numerical analyses (including machine learning algorithms, solvers, etc.) and to generate data-rich dashboards.
The app is redeployed in full every Tuesday between 10:00 and 14:00 (Paris time). The downtime is typically kept under 5 minutes. Lokad takes full ownership of all the versioning concerns.
The IT department is not expected to ever acquire any specific competency with Lokad’s stack. However, for the curious, complete technical documentation is available.
IT contribution overview
We expect the IT department to set up a data pipeline that pushes a short series of relevant flat file extractions toward Lokad by SFTP or FTPS. The extractions are performed over the transactional systems (e.g., ERP). We have a strong preference for raw table extractions (no filter, no join, no transformation), which require minimal effort. From an ETL perspective, we only require the ‘E’ (extract) part under its simplest form (plain copy). Format-wise, Lokad is compatible with every reasonably tabular flat file.
The data pipeline is expected to run at least on a daily basis, and to be fully automated. The amount of work for the IT department depends on the data extraction scope (which systems? which tables?). However, as a rule of thumb, the data pipeline setup typically requires about 15 to 45 man-days, even for large companies. Once the data pipeline is in place, Lokad typically requires only minimal monitoring from the IT department, which is typically done with 1 or 2 man-days per month.
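A minimal sketch of such a raw table extraction, assuming a SQL-accessible source system; the table, columns, and file name below are hypothetical, and a real pipeline would connect to the actual transactional database rather than SQLite:

```python
import csv
import sqlite3

def extract_table(conn, table, out_path):
    """Dump a table verbatim (no filter, no join, no transformation) to a CSV flat file."""
    cur = conn.execute(f"SELECT * FROM {table}")  # plain copy: the 'E' of ETL
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur)

# Demo with an in-memory database standing in for the transactional system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, sku TEXT, qty INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "A-100", 5), (2, "B-200", 3)])
extract_table(conn, "orders", "orders.csv")
```

The resulting file would then be pushed to Lokad over SFTP or FTPS by whichever file transfer tooling the IT department already uses.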
The app is hosted in Microsoft Azure data centers located in the EU. We do not process any personal data, as we do not need such data to operate. When establishing the data extraction scope, we exclude any column or field that would contain personal data.
For authentication, our preference goes to SAML. We strongly suggest having users access Lokad via a federated identity such as Azure Active Directory, Office 365 or Google Workspace. This eliminates all the password-related problems.
Upon request, security audits and penetration tests can be performed by our clients. Details depend on the negotiated agreements.
For more details, see Security at Lokad.
The Quantitative Supply Chain is more of a journey than a destination. Yet, at the same time, the supply chain leadership that engages their company in a Quantitative Supply Chain initiative requires visibility when it comes to the project timeline. While positive returns can be obtained in a couple of months, it frequently takes up to two years to unlock the full potential of the Quantitative Supply Chain. In the following piece, we provide an overview of a typical timeline associated with a Quantitative Supply Chain initiative for a mid-market company. For large companies, timelines should be expected to be twice as long.
Project kickoff: Representatives from both parties introduce themselves to each other and schedule weekly meetings. These weekly meetings will last right until the Production phase is reached. The Supply Chain Scientist presents the different phases of implementation and the various deliverables that can be expected by the client. The rest of the call is dedicated to reviewing various supply chain details and IT characteristics of the integration. After the call, a summary documenting the project’s organizational aspects is produced and sent over to the client.
Data specifications: Shortly after the kickoff meeting, the Supply Chain Scientist produces the data specifications required for the implementation of the project. These specifications are reviewed and validated together with the client. In particular, this document shall define the data to be extracted from the client’s IT systems. As a rule of thumb, the extraction should stay as close as possible to the original data as it exists in the client’s IT systems.
1st Data upload: After validating the specifications, the first set of data is uploaded on Lokad’s servers by the client. Generally, at this stage, the upload is not yet carried out via an automated process as several attempts are usually required to establish a consensus on the fine print of data specification.
Validating the data: The Supply Chain Scientist performs an in-depth investigation of the client’s dataset content. In particular, sanity checks are introduced to monitor the quality of the data according to multiple metrics. The goal is to make sure that 1) the dataset is properly refreshed by the upload process, 2) the dataset correctly reflects the reality of the business and 3) the dataset is coherent and complete enough for supply chain optimization purposes.
In terms of deliverables, during this phase the Supply Chain Scientist provides the client with various dashboards that assess the health of the data. These dashboards can be used by the client even for purposes that go beyond the Quantitative Supply Chain initiative itself - as part of their internal data quality assurance process for example.
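By way of illustration, the sanity checks described above boil down to simple, scriptable rules over the uploaded dataset; the thresholds, column names, and messages below are hypothetical and do not reflect Lokad’s actual data health logic (which is written in Envision):

```python
from datetime import date, timedelta

def data_health(rows, today):
    """Toy data health checks: freshness, completeness, coherence."""
    issues = []
    # 1) Freshness: the extraction must reflect recent activity (i.e. refreshed uploads).
    latest = max(r["order_date"] for r in rows)
    if today - latest > timedelta(days=2):
        issues.append(f"stale data: latest order is {latest}")
    # 2) Completeness: key fields must be populated.
    missing = sum(1 for r in rows if not r["sku"])
    if missing:
        issues.append(f"{missing} row(s) with a missing SKU")
    # 3) Coherence: quantities must be plausible.
    if any(r["qty"] < 0 for r in rows):
        issues.append("negative quantities found")
    return issues

rows = [
    {"order_date": date(2023, 9, 20), "sku": "A-100", "qty": 5},
    {"order_date": date(2023, 9, 21), "sku": "", "qty": -1},
]
report = data_health(rows, today=date(2023, 9, 21))
```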
Mid-project audit: 6 weeks following the initial kickoff, a meeting is set up to evaluate the project completion status. The objective of this “audit” is to address, as early as possible, the problems that may be experienced during the implementation, especially those that could delay the production phase. At Lokad, this “audit” consists of an exchange between the Supply Chain Scientist and the client, based on a checklist that is communicated to the client in advance by the Supply Chain Scientist, right after the kick-off meeting.
Upload automation: Once both parties validate the overall quality of the dataset that has been uploaded so far, the client implements an automated process that transfers their dataset to Lokad on a regular basis - ideally daily. At the same time, on Lokad’s side, the data health logic - with its multiple checks - is scheduled to be refreshed after every upload.
Setting up the optimization: From this point on, the Supply Chain Scientist has all the necessary ingredients for implementing the optimization of the decisions previously agreed on with the client. They implement scripts to generate different quantitative outputs: operational purchase suggestions, dispatch suggestions, etc. The figures produced by these scripts can be visualized in dashboard form. At this stage, these dashboards represent only a preliminary version of the final dashboards and need to be revised together with the client.
Feedback & fine-tuning: The client’s requests to make some kind of alteration or “tweak” the different outputs usually lead to some fine-tuning of the scripts written by the Supply Chain Scientist. There are many parameters and methods that can be adopted to adequately align the characteristics of the supply chain being optimized with the optimization logic. When the methodology itself is of strategic importance to the client, this is explicitly discussed between the client and the Supply Chain Scientist.
Production: After several rounds of fine-tuning and revision, the client comes to trust the logic implemented by the Supply Chain Scientist. At this point, the client can start using the service in production, that is, directly executing the supply chain decisions as computed by the software. When the client validates that the solution is production-ready, the Supply Chain Scientist delivers documentation which ensures the maintainability of the solution.
Support & maintenance: The solution is operational and is used by the client while the Supply Chain Scientist monitors the smooth daily execution of the data pipeline. Calls are regularly organized between the client and the Supply Chain Scientist to verify that the optimization delivers the expected degree of supply chain performance. Moreover, supply chains are not static, thus, business or IT changes, small or big, need to be reviewed: a new warehouse, shift of the market, new process, etc. The Supply Chain Scientist proposes suitable modifications to accommodate these different changes. Checkpoint calls are scheduled with an agreed-upon frequency, typically monthly.
Frequently Asked Questions (FAQ)
1. Release Management
1.1 How do releases work for Lokad?
Lokad handles all releases internally, which helps ensure complete transparency for clients. Any releases that may impact a client are coordinated with them - via the client’s technical teams - well in advance. Generally speaking, Lokad adopts a cautious approach to releases: if a scheduled release would not provide sufficient preparation time for a client, the release would be temporarily postponed.
Lokad’s releases are very granular, and the design usually allows the client to opt out of a particular technical element of an overall release. Thus, if we must postpone the implementation of one element - for which our client is not yet ready - the overall release can still take place (and implement the other non-impacting elements).
1.2 How frequent are the releases?
Lokad releases a new version every Tuesday, typically around 11 AM (CET).
1.3 Do you provide a plan of the upcoming releases?
Yes, see Release Management 1.2.
1.4 Does a version change involve a reinstall or just a patch?
Lokad redeploys its own solution through automated means (scripts). We do not patch systems in production. If we have a “security patch” to deploy, we redeploy the solution through our usual automated means.
1.5 How long does it take to apply a major release?
The automated process takes about 1 hour. There is a phased roll-out (machine by machine), as we intend to keep Lokad’s platform operational and accessible during the release. Operationality during a roll-out is discussed in Release Management 1.7.
1.6 Who is responsible for the correct execution of the release?
The Lokad team takes full ownership of the correct execution of the release.
1.7 Do you have a downtime during the release?
Mostly no, but bear in mind that Lokad’s solution is a distributed system dedicated to large-scale computations. As such, the impact of a release differs between the front- and back-end systems. Client-facing subsystems, such as the dashboards, are designed for zero downtime. Back-end systems, such as those in charge of the execution of batch jobs, might be paused for a few minutes (at least for some jobs). However, these batch jobs can be scheduled, thus proactive planning should allow for the completion of batch jobs outside of the release time frame.
1.8 What is your testing process or strategy for a release?
Lokad utilizes automated processes dedicated to testing and ensuring the correctness of an upcoming release. These processes include extensive suites of automated tests (measured in the thousands): unit tests, functional tests, performance tests, etc. We have also engineered dedicated tools that let us reproduce entire sequences of past job executions within the Lokad platform. These tools allow us to check that Envision scripts have the exact same behavior before and after an upcoming release. Further, we can check that the performance profiles of existing scripts remain in line with schedule expectations, as defined by our clients.
1.9 Do you have multiple environments?
Yes; however, the alternate environments (at the platform level, besides the production one) are typically not intended for our clients. In addition to the production environment and the transient development ones, we have an “evergreen” environment that matches the last stable version of our codebase. This environment is used to validate a specific subset of our automated testing processes. Our clients may gain access to our “evergreen” environment in order to validate that a specific upcoming release behaves as expected. This situation may arise if there is IT integration between Lokad and the client. In practice, this situation is infrequent.
If the goal is to be able to run (side-by-side) multiple variants of Envision scripts, then one Lokad account can be partitioned into multiple “environments”. If the goal is to be able to perform any kind of testing, then a second Lokad account can be provided for transient testing purposes. This second approach keeps the primary client account (used for production) isolated from these tests.
1.10 How many different versions can co-exist?
Lokad is a multi-tenant SaaS that runs the same unique version of the platform for all its clients. However, within a client account, Lokad can operate as many distinct versions of the Envision scripts as desired by the client.
1.11 Can a client opt out from a release?
As Lokad is a multi-tenant SaaS that runs the same unique version for all clients, it is not possible to opt out of a release. However, from a business perspective, this is moot as any “change” is implemented through the execution of Envision scripts within the Lokad solution.
For situations in which a release may be temporarily postponed, see Release Management 1.1.
1.12 Do you have release notes? Do you provide them to your clients?
Yes. These notes are shared internally (with our supply chain scientist teams). If explicitly agreed as part of a contract, these release notes can be made accessible to a client. In practice, the release notes of the Lokad platform are only of interest to people who work with Envision code.
1.13 What is the process for a client to request an evolution of the solution?
Most of our clients benefit from a “software + expert” offering, where a team of Lokad’s supply chain scientists is responsible for the implementation and maintenance of a client’s supply chain solution. These situations are known as “supply chain as a service”. In these arrangements, the client routinely interacts with one (or more) supply chain scientists. Also, most clients benefit from a weekly or monthly steering committee to discuss the present state of the solution and any desired evolutions. This method is used by Lokad to collect all the evolution requests and propose a roadmap for the implementation of changes.
1.14 Is it possible to administrate the application web-service and configure its parameters?
Yes, in the sense that the Lokad platform is programmatic by nature. Lokad’s “analytical” logic takes the form of Envision scripts - Envision being the DSL (Domain-Specific Language) engineered by Lokad for the purpose of the predictive optimization of supply chain.
Thus, in a sense, every single parameter configuration is available by leveraging Envision scripts within the account.
2. Performance
2.1 Does your SLA (service level agreement) cover a 99.xy% uptime?
Yes. The SLA is part of our default contractual agreement. However, the notion of uptime in the context of a distributed computer system - dedicated to the predictive optimization of supply chains - is complex. Consider the following scenarios:
- Lokad is sent client data (a daily step) 2 hours behind schedule. This may very well disrupt the ordinary efficiency of our resource allocation heuristics. This, in turn, may prolong the time needed to perform Lokad’s batch jobs (e.g., 75 minutes instead of the customary 60). Some may consider this a 15-minute downtime, but the delay is beyond our control.
- Lokad receives the same client data on time, but the data presents a sizeable drop in stock levels. This triggers an interruption of the batch jobs (Lokad-side) and alerts a supply chain scientist to investigate the problem. The supply chain scientist sees that an automated replenishment order is unprecedentedly large and decides that a direct assessment from the client is necessary. The next day, the client confirms that the stock data was corrupted and would have resulted in a large stock write-off. Some may consider this a 24-hour downtime, but that seems practically obtuse in context.
The biggest danger to a supply chain optimization solution is not being a bit late; it is being very wrong. Once a supply chain decision is made, like (incorrectly) triggering a production batch, unmaking it can be exceedingly costly. We encourage our clients not to become arbitrarily attached to indicators in isolation, as this attitude can incentivize people to deliver inferior overall work so long as it appears to satisfy a KPI (such as x.y% uptime).
2.2 Do you guarantee response time for user requests within X seconds?
Yes, under 500ms, but with caveats.
We have designed what roughly amounts to “constant-time dashboards”. Under the hood, a dashboard’s display requires a single request over the network, and in our back end, we collocate all the dashboard data to keep the number of network requests low (usually measured in single digits). This design goes a long way to “guaranteeing” the low latency of the typical user request in the display of a dashboard. This design choice also prevents the dashboard from becoming crowded with tiles - each of which would require network requests - and slowing the overall user experience.
Concerning the duration of batch jobs, through Envision we can provide guarantees - at compile time - that a batch job will complete. We can also guarantee largely reproducible completion times for our batch jobs. These guarantees are obtained through static analysis and careful design of the Envision language - which makes certain classes of static analyses possible in the first place. This approach has limits, but it is vastly superior to designs that offer no guarantees at all.
However, end-to-end latency is not entirely in our hands. For example, we do not control the quality of the internet connection used by our clients. A large spreadsheet from Lokad will take time to download over a low-bandwidth network.
2.3 Do you have system performance audit logs?
Yes. We collect very granular performance logs for all computing resources involved: CPU, memory, storage, bandwidth, etc. These performance logs are used, among other things, to ensure that a new, as-of-yet unreleased version of the platform meets our expectations in terms of performance. We test this by comparing the performance of the new version with the performance of previous ones, as evidenced through these logs.
2.4 Is it possible to monitor slow responses or congestion?
Yes. The Lokad platform comes with an internal scheduler that can track the timely execution of batch jobs. The design of Lokad largely ensures constant-time response for all the requests - except the long running operations, which are treated as batch jobs.
As Lokad is a multi-tenant platform, a large portion of the performance monitoring is not directly accessible to our clients (as it covers the platform’s performance as a whole). As can be expected, Lokad teams continuously monitor the performance of our platform.
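As an illustration of tracking the timely execution of batch jobs, a scheduler can flag jobs that overrun their typical duration; this is a toy sketch with hypothetical job records, not Lokad’s scheduler API:

```python
def flag_slow_jobs(jobs, tolerance=1.25):
    """Flag batch jobs whose last run exceeded their typical duration by a tolerance factor."""
    return [name for name, typical_min, last_min in jobs
            if last_min > typical_min * tolerance]

# Hypothetical job records: (name, typical duration in minutes, last run in minutes).
jobs = [
    ("daily-forecast",      60, 62),   # within tolerance
    ("replenishment-order", 20, 31),   # 55% over its typical duration
]
slow = flag_slow_jobs(jobs)
```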
2.5 Do you have load balancers?
Yes. Lokad’s load balancers are primarily intended for reliability rather than performance purposes. Network-level load balancing is done through the networking layer of the cloud computing platform that we use (Microsoft Azure). However, the distribution of the internal data processing workload, as handled by the Lokad platform, is not managed through load balancers, but through an in-house orchestrator associated with our compiler stack.
2.6 Do you pool resources like DB connections, sessions, etc.?
Yes. However, the Lokad platform does not rely on a transactional database to operate. Thus, there are no DB connections to pool. Nevertheless, we do pool many other resources, whenever it makes sense from a performance perspective.
2.7 Do you support parallel processing?
Yes. Envision (our DSL) is designed around the notion of automatic parallelized execution. The Lokad platform actively leverages parallelization at multiple levels: at the CPU core level through SIMD (Single Instruction/Multiple Data) operations; at the CPU level through multi-threaded executions; and at the cluster level through distributed computing. As parallel processing is a core design aspect of Envision, the quasi-totality of the workloads executed on the Lokad platform benefit from extensive parallelization, by default, without any specific effort for our clients or our supply chain scientists.
2.8 Do you support caching any frequently accessed data?
Yes. However, caching is frequently introduced as a workaround to cope with the performance limitations of transactional databases. Given the Lokad platform does not include transactional databases, we do not use caching for this purpose.
2.9 Do you compress data to reduce network traffic?
Yes, we compress most of the network traffic. However, we cannot compress all of it, as that would present a security vulnerability known as BREACH (Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext). BREACH happens when three conditions are combined:
1. The response is compressed.
2. The response contains a secret.
3. The response contains a string that can be controlled by the attacker crafting the request.
In order to defend against BREACH, Lokad disables compression on all responses where the third condition is true. We also compress data for reasons beyond reducing network traffic: firstly, to reduce data storage costs; and secondly, to reduce computation delays.
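To illustrate the mitigation of the third condition, a server can skip compression whenever the response reflects a request-controlled string; this is a simplified sketch of the general technique, not Lokad’s actual implementation:

```python
import gzip

def render_response(body, reflected_inputs):
    """Compress a response only when it reflects no request-controlled string,
    which removes the third precondition of a BREACH attack.
    Returns (payload, was_compressed)."""
    if any(s and s in body for s in reflected_inputs):
        return body, False  # reflected input present: send uncompressed
    return gzip.compress(body), True

# A response echoing a query parameter ("widgets") is left uncompressed.
body = b'{"search": "widgets", "token": "SECRET"}'
_, compressed = render_response(body, reflected_inputs=[b"widgets"])
```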
2.10 Do you do performance testing?
Yes. Lokad has an extensive automated performance-oriented instrumentation layer. These tools allow us to assess, before each release, the performance delta of the upcoming version compared to the version currently deployed. They also allow us to reproduce the same workloads as observed in production and monitor the resulting performance; not just in wall-clock time, but across all the relevant computing resources (memory, bandwidth, I/O, CPU, etc.).
2.11 Do you monitor performance at the transaction level?
Yes. However, as the Lokad platform does not utilize a transactional database, there are no “transactions” to be monitored (in the traditional sense). The closest equivalent is the execution of Envision scripts. For these scripts, we monitor the performance at a very granular level, which is roughly analogous to monitoring the fine print of the query plan execution (from a transactional database perspective).
2.12 What is the performance impact of having concurrent users on the solution?
Almost none. Lokad’s design ensures that dashboards can be serviced in constant time even for a large number of users (by B2B standards). This approach is in stark contrast to many alternative architectures, most notably transactional databases and Business Intelligence cubes.
However, given any individual user could (if in possession of appropriate system rights) trigger arbitrarily large workloads, the number of concurrent users is, at best, a secondary concern when it comes to the solution’s performance. As far as supply chain predictive optimization is concerned, the batch jobs used to perform the analytics of interest represent more than 99% of the workload. The less than 1% remainder is dedicated to servicing user requests.
2.13 Is the system designed to scale vertically and horizontally?
Yes. From our perspective, vertical and horizontal scaling are complementary, and the design of the Lokad platform leverages both. The internal orchestrator - the one in charge of the parallelization - typically starts with vertical scaling, as vertical scaling largely facilitates data colocation. Then, the orchestrator leverages horizontal scaling if the workload is large enough to benefit from multi-machine execution.
2.14 Do you auto-scale compute and storage as needed?
Yes. The Lokad platform is multi-tenant. Through multi-tenancy, we perform large-scale low-latency allocations of compute resources. This means that the compute auto-scaling Lokad provides is orders of magnitude faster than spinning up large VMs (virtual machines) from a cloud computing provider. The storage auto-scaling is largely performed by leveraging the auto-scaling properties of the persistent key-value store, as provided by the underlying cloud computing platform (Microsoft Azure).
2.15 How does your application manage “Big Data” requirements?
The Lokad platform has been specifically designed for “Big Data” processing. As of January 2023, the whole Lokad platform manages about 1 petabyte of data across our entire customer base. Our platform can process individual files up to 100 GB, and we routinely process tables with tens of billions of lines. See Security of Lokad 4.10.
This point is particularly technical and goes beyond the scope of this document. For an extensive explanation, we recommend Victor Nicollet’s 4-part series on the design of the Envision Virtual Machine.
2.16 Can Lokad’s cloud-based solution be configured in light of tight bandwidth and latency constraints (client-side)? Such as: bandwidth = 3Mbps (download) / 1Mbps (upload), latency = 600-800ms
2.17 What is the solution’s average and peak data throughput capabilities compared to a benchmark of 1 (low-end) and 5 (high-end) messages per second?
The Lokad platform does not operate with messages, nor are interactions with the platform performed through messages. However, throughput does matter in order to swiftly process vast datasets of transactional data. The Lokad platform manages, in aggregate, over 1 petabyte of data. We routinely handle over 1 terabyte per minute for large calculation batches.
A very high throughput is important to avoid operational delay when processing very large datasets (tens of terabytes) with complex calculations featuring machine learning and mathematical optimization steps.
2.18 What size of messages can the solution handle? Please provide details regarding minimum, maximum, and average values.
The Lokad platform does not operate with messages. The closest thing would be “flat files”. These flat files can be sent to - and retrieved from - Lokad. The Lokad platform can process files that are individually as large as 100 GB. However, this is not a recommended practice, as such files are usually unwieldy - not for Lokad, but for the clients, who must familiarize themselves with external tools to produce and consume such large files.
3. Incidents
3.1 What is the process for a client to report an incident?
Most of our clients benefit from a package that includes access to our team of supply chain scientists. These supply chain scientists are the first point of contact for reporting incidents. In the event of an incident, we suggest clients either telephone their supply chain scientist directly - if the problem is urgent - or send an email. The supply chain scientists deal with incident management, including any escalations within Lokad’s organization.
3.2 Do you offer a ticketing system?
Yes. Lokad leverages a third-party ticketing system. However, as of January 2023, we have started developing an internal solution that offers a tight integration with the rest of our platform.
3.3 Do you support reporting incidents to third-party tools?
Yes, under the provision of a dedicated contractual agreement.
In the context of predictive supply chain optimization, “incidents” can vary and be difficult to define. Generally, our clients do not think of minor platform-level events (such as minute downtimes) as “incidents”. Rather, actual supply chain oddities - that may or may not reflect issues with Envision scripts implemented in the client account - would be better candidates. External IT departments are rarely involved in the resolution of these incidents.
3.4 How do you ensure high availability?
Lokad became an early adopter (c. 2010) of cloud computing platforms precisely to ensure higher availability. Besides making the infrastructure redundant (see Incidents 3.5), the design of Lokad’s solution strongly emphasizes simplicity. In contrast to comparable enterprise software solutions, we have significantly fewer complex dependencies (by almost an order of magnitude). An enormous layer of complexity absent from our solution is a transactional database. By having fewer moving parts, we have far fewer failure modes, which helps us maintain high availability for our clients (who are distributed across several time zones).
3.5 Do you have a redundant infrastructure (if yes, how)? Do you avoid single points of failure?
Yes. Our critical systems are redundant, precisely to avoid single points of failure. Redundant systems include the subsystems that support stateful protocols like SFTP. Furthermore, data storage is not only redundant but geographically redundant, too. This redundancy is leveraged during our releases to preserve the uptime while the roll-out is in progress.
3.6 How do you recover from hardware failures?
Recovery from most hardware failures is transparently performed by the cloud computing platform that Lokad uses. The built-in redundancy of Lokad’s platform ensures that transient hardware failures do not impact uptime while the cloud computing platform re-allocates a new non-defective resource. For persistent data storage, Lokad only leverages services (i.e., key-value stores) that are themselves protected against hardware failures through their own layers of redundancy.
3.7 Do you have a backup?
Yes. We have a backup environment dedicated to severe disaster recovery scenarios (beyond a data center-level downtime). This backup environment is completely isolated from the production environment. The backup environment can read from the production one (but not write), while the production environment can neither read nor write from the backup one.
3.8 Do you have a disaster recovery plan?
Yes. At the technical level, the disaster recovery plan covers the complete re-instantiation of the Lokad platform. This part extensively leverages the automated processes that we have in place for our weekly releases. At the business level, the disaster recovery plan includes contacting every client we have, typically following a process that has been agreed upon with each one. For most of our clients, the appointed supply chain scientists act as the primary points of contact for the duration of the recovery.
3.9 Does the solution support Point-in-time recovery (PITR) across database and data outside of database? What is the Recovery Point Objective (RPO) of the solution?
Lokad’s solution features a continuous data protection design through the live incremental backup of both its event store and its content store. Therefore, we can perform PITR for any given moment (down to the minute).
Our RPO is 1 minute for data center-level disasters, provided the data is not compromised. We achieve this by leveraging the geographically redundant writes of our persistent key-value store. If the key-value store is compromised, Lokad recovers from its backup storage (kept as isolated as possible from our production systems), also hosted in a different geographical location. In this case, the RPO is 12 hours.
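To illustrate why an event store makes PITR straightforward, recovery amounts to replaying the event log up to the target timestamp; the event shapes below are hypothetical, not Lokad’s internal format:

```python
def recover(events, target_ts):
    """Rebuild state by replaying, in order, every event recorded at or before target_ts."""
    state = {}
    for ts, key, value in sorted(events):
        if ts > target_ts:
            break
        state[key] = value
    return state

# Hypothetical event log: (timestamp, key, value).
events = [
    (1, "stock/A-100", 50),
    (2, "stock/A-100", 42),
    (3, "stock/A-100", 0),   # the corrupted write we want to roll back
]
state_before_corruption = recover(events, target_ts=2)
```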
3.10 Is the solution able to generate integrity violation alerts? Does the solution have the capability to add or extend integrity checks as per requirements?
Yes, though this type of concern primarily reflects software designs built on top of a transactional database. Lokad’s platform does not operate with a transactional database but with an event store, adopting an event sourcing design rather than a relational one. This does not remove the need to enforce data integrity, but those concerns are addressed in alternative ways.
When it comes to the processing of client data, Envision (Lokad’s DSL) has extensive capabilities geared towards checking its quality. Through Envision, it is possible to check integrity and generate alerts. This logic can be extended or amended in any way deemed appropriate by the client.
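As an illustration of the kind of integrity checks described above, the sketch below is written in Python rather than Envision (whose syntax is not reproduced here), and the column names (`OrderId`, `Quantity`) are hypothetical. The idea is the same: scan a flat-file extraction against declarative rules and collect alerts.

```python
import csv
import io

def integrity_alerts(rows):
    """Run illustrative integrity rules over tabular rows; return alerts."""
    alerts = []
    seen = set()
    for i, r in enumerate(rows, start=1):
        # Rule 1: primary-key uniqueness.
        if r["OrderId"] in seen:
            alerts.append(f"line {i}: duplicate OrderId {r['OrderId']}")
        seen.add(r["OrderId"])
        # Rule 2: quantities must be non-negative.
        if int(r["Quantity"]) < 0:
            alerts.append(f"line {i}: negative Quantity for {r['OrderId']}")
    return alerts

sample = io.StringIO("OrderId,Quantity\nA1,3\nA2,-1\nA1,5\n")
for alert in integrity_alerts(list(csv.DictReader(sample))):
    print(alert)
```

Extending the checks amounts to adding rules to the loop, which mirrors how such logic can be amended within Envision scripts to match client-specific requirements.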
3.11 Are backups periodically tested for proper data-restore functionality?
Yes. The event-sourcing design of Lokad, combined with its content-addressable store, makes testing for backup and data-restore functionality much more straightforward than it is for most mainstream designs that leverage relational (SQL) databases.
See also Incidents 3.7.
3.12 Are the disaster recovery plans periodically tested for proper disaster-recovery functionality?
Yes. Lokad’s deployment strategy leverages end-to-end automated scripts, deliberately keeping very few human interventions - one notable exception being the ability to trigger a full production redeployment. This heavily automated approach facilitates the testing of disaster-recovery functionality.
See Incidents 3.8.
3.13 Can recovery be performed for individual customers and/or customer environments?
Yes. Our internal tooling supports restoring a selected customer’s account (including restoring the account/environment to a given point in time). However, as the Lokad platform itself features extensive versioning capabilities (e.g., Envision scripts are versioned, and previous versions are accessible from within the app), this capability is rarely used.
3.14 Do the backup and disaster recovery plans fulfill customers’ RTO (Recovery Time Objective), RPO (Recovery Point Objective), and disaster scenario requirements (as defined by and agreed with respective clients)?
Yes. The Recovery Time Objective (RTO) would refer here to the amount of time Lokad’s platform could be theoretically down without causing significant damage to the customer, as well as the time spent restoring the platform and its data so it can resume normal operations after a significant incident.
The RTO depends very much on the fine print of the specific processes supported/provided by Lokad. For example, a customer relying on Lokad to schedule monthly overseas purchasing orders may have an RTO of 1 week. Conversely, a customer that relies on Lokad to optimize its daily inventory dispatch from a warehouse to multiple stores may have an RTO of 1 hour.
In practice, various technical contingencies can be put in place to substantially relax the RTO (i.e., make a given downtime less damaging). For example, failover decisions can be computed ahead of time and used whenever the Lokad platform is unavailable. Compared to “regular” optimized decisions, failover decisions may exhibit slightly degraded supply chain performance, given that they (by definition) do not leverage the most recent data.
For our managed accounts, it is the responsibility of the Supply Chain Scientist at Lokad to jointly craft a process - with the client’s operational teams - that affords a lenient RTO while ensuring minimal business impact in the event of an actual incident. From our perspective, this challenge is first and foremost a supply chain problem rather than an IT one.
See also Incidents 3.9.
4.1 Which programming languages do you use?
The Lokad platform is developed in C#, F# and TypeScript. The platform also features Envision (Lokad’s DSL), which is used to implement the client-specific supply chain solutions.
4.2 Which development suite do you use?
Lokad’s software engineering teams use Microsoft Visual Studio. The supply chain scientist teams use the Lokad platform itself, which features its own development suite.
4.3 What operating system do you support?
Lokad is a web-based platform and we support all operating systems that have access to a modern web browser (ex: Firefox). Internally, Lokad’s platform is compatible with both Linux and Microsoft Windows, although all our production systems are deployed under Linux (Ubuntu).
4.4 What database system do you use or support?
Lokad supports all database systems that can produce flat file exports. This covers practically every database system on the market, including legacy ones. Internally, Lokad does not use a database system, but a key-value store. At the time of writing (January 2023), we use the Blob Storage provided by Microsoft Azure.
4.5 What caching system do you use?
Lokad engineered its own caching subsystems in C#/.NET. These subsystems are tightly integrated with the rest of the solution and bear little resemblance to traditional caching systems, which are chiefly intended to mitigate the performance issues of a relational database (which Lokad does not have).
4.6 How does the solution handle certificates?
Lokad leverages Let’s Encrypt through automated certificate renewals.
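For readers unfamiliar with automated renewals, the fragment below illustrates what such automation typically looks like. It is purely illustrative: certbot is one common ACME client for Let’s Encrypt, and nothing here describes Lokad’s actual tooling; the `nginx` reload hook is likewise an assumption.

```shell
# Illustrative crontab entry only - not Lokad's actual setup.
# certbot is one common ACME client; it attempts renewal twice daily,
# but only renews certificates that are close to expiry.
0 3,15 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```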
4.7 What are the technical prerequisites to use the solution?
The main technical prerequisite is a transactional system that keeps track of one’s inventory. Additionally, it helps if the client has some experience extracting data (as flat files) from their transactional systems, but this is certainly not a prerequisite.
4.8 List any additional third-party licenses required to operate Lokad’s solution (e.g., OS, SQL,…).
N/A. Lokad does not require any 3rd party licenses to operate. A wide array of open-source tools exists to support the integration of Lokad (i.e., flat files transferred through FTPS or SFTP); hence, no 3rd party license is required, even indirectly, to benefit from the Lokad platform.
4.9 Does the service require any browser plug-ins or special software?
Lokad is a webapp. It does not require browser plug-ins or any special software.
4.10 What are the inbound and outbound interfaces of the application?
Lokad’s solution offers a web interface - accessible through HTTPS - and file protocols, namely SFTP and FTPS.
4.11 How do you ensure there are no data leaks between tenants?
Lokad’s solution segregates tenant data through its very design, which ensures that data leaks (even accidental ones) do not occur. Furthermore, all code shipped to production is peer-reviewed, thus providing an additional layer of protection. Finally, we direct security researchers (people performing pentests) to specifically investigate the possibility of data leaks. We give them access to multiple Lokad accounts - in a dedicated and fully isolated environment that mirrors the production one - so they can aggressively check this property of our platform.
4.12 Can the solution be containerized?
Yes, but there is little to no benefit in containerizing Lokad’s platform. Containerization brings value when there are complex dependencies (e.g., a transactional database, an isolated caching system, etc.). We do not use containers in production or development, which improves our security by eliminating entire classes of vulnerabilities. Instead, we keep the solution simple enough that deployment can be performed with small shell scripts.
4.13 Can the GUI components be decoupled from the backend?
Yes, GUI components (in this case, web interfaces) are stand-alone. This design helps to achieve higher availability. End-users can access their Lokad account dashboards even if a sudden downtime affects one of the other subsystems.
4.14 Does the Lokad application support localization functions (such as changing language)?
Yes, the application supports localization functions. All the user interfaces produced by Lokad’s platform can be localized in any language: the whole technological stack adopts UTF-8 in order to accommodate all the charsets that exist beyond the Latin one. Moreover, any user interface produced by Lokad’s platform can be re-localized - after delivery - into another language.
4.15 Is it possible for end-users to update or create new translations after delivery of post-processed data from Lokad?
Yes, see 4.14 above.
4.16 Does your system have a built-in Help? If yes, in which language(s)?
Yes, Lokad comes with very extensive public documentation (in English). However, using the Lokad platform entails the creation of bespoke user interfaces and our regular process involves at least two forms of documentation or help.
First, the dashboards crafted within the Lokad solution are intended to be contextually documented - right from the dashboard itself. In particular, we even have a Markdown tile dedicated to rich text documentation. Second, our deliverables include a “Joint Procedure Manual”. Overall, the manual provides detailed analysis of not only the mechanics of the solution, but also why each element was selected (and how it satisfies the client’s specific supply chain needs).
4.17 Is the webapp responsive?
The Lokad webapp, along with its supporting materials (like the technical documentation), has been designed to be responsive. However, some advanced capabilities, like editing code, are impractical on mobile and tablet devices. Thus, the design maximizes responsiveness for the activities anticipated on each class of device, whether PC, tablet or mobile.
4.18 If your system is a webapp, which browser and versions do you support? What is your minimum internet browser standard?
Lokad is a webapp and we support the major “evergreen” web browsers such as Chrome, Firefox and Safari. We do not attempt to support older browsers for security reasons, as supporting those browsers would implicitly endanger our clients. Simply put, a vulnerable browser can be leveraged by a malicious actor to exfiltrate data. That being said, we are also quite conservative when it comes to new browser capabilities. As a rule of thumb, we avoid relying on any browser capability that has not been adopted by all the major web browsers for at least 1 year.
4.19 For mobile and tablet applications, which OS (and versions) does Lokad support?
N/A. As Lokad is a webapp served as SaaS, our clients are not concerned with OS support. Internally, Lokad is developed under Windows, while our entire cloud-hosted production environment runs under Linux. Thus, we are quite confident in the broad portability of the Lokad solution. Although we do not feel any present need to change this setup, should valid evidence present itself, we will adapt accordingly.
4.20 Can the Lokad webapp provide notifications for end-users? If yes, which technology/protocol is leveraged?
Yes, Lokad can send notifications through programmable HTTP hooks. These hooks can leverage a 3rd party system - frequently one already in place in the client company - to send an email notification or any other type of notification deemed appropriate. The hooks are also typically used to signal the availability of data to be retrieved from the Lokad platform through SFTP or FTPS.
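On the client side, consuming such a hook usually amounts to mapping an incoming payload to concrete notification actions. The sketch below is a hypothetical illustration of that mapping: the payload shape (`event`, `path`), the event names, and the email addresses are all assumptions, not Lokad’s actual hook contract.

```python
import json

def route_notification(payload: dict) -> list:
    """Map a hypothetical incoming hook payload to notification actions."""
    actions = []
    event = payload.get("event")
    if event == "files-ready":
        # Typical use: output files can now be fetched through SFTP/FTPS.
        actions.append(f"sftp-pull:{payload['path']}")
        actions.append("email:supply-chain-team@example.com")
    elif event == "run-failed":
        # Escalate failed computation batches to the on-call team.
        actions.append("email:it-oncall@example.com")
    return actions

hook_body = json.dumps({"event": "files-ready", "path": "/output/orders.csv"})
print(route_notification(json.loads(hook_body)))
```

Such a dispatcher would typically sit behind a small HTTPS endpoint operated by the client, with the heavy lifting (email delivery, paging) delegated to existing 3rd party systems.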
4.21 Are there shared elements on the solution that are common to all customers (such as monitoring functions, backup- or patch management solutions, etc.)?
Yes. As Lokad is a multi-tenant app, the infrastructure-level capabilities are all shared across tenants (i.e., client accounts). This includes monitoring (for uptime, performance and security), backups, patch management, upgrades, etc.
4.22 Does the solution allow for multiple-destination messaging functionality (i.e., the ability to send a message to more than one recipient or application)?
The Lokad platform does not operate with messages. However, we do provide HTTP-hook capabilities which can be used to generate arbitrarily complex message notifications, typically through low-cost third-party systems. Those notifications are sometimes used by supply chain teams to monitor the timely completion of mission-critical computation batches with the Lokad platform.
4.23 Do all the critical systems and components use the same time source (NTP) or synchronize their clocks in some other reliable way?
Yes. Lokad uses the default NTP service that comes with Ubuntu (ntp.ubuntu.com). More specifically, ntpd runs as the time-sync service, synchronizing against an external NTP time source accessed over the network.