I am working on a VRP project using Google OR-Tools with Python.
Currently I have tight time-window constraints, high demands, and limited vehicle capacities.
When I run the solver, it always chooses to deploy the vehicle with the biggest capacity.
Can I make the solver deploy the smaller vehicles even though that means deploying more of them? In reality, deploying bigger vehicles costs more.
Also, is there any built-in function that allows a vehicle to depart again for another trip?
Thank you!
1) You can set a fixed cost for each vehicle.
ref: https://github.com/google/or-tools/blob/5ff76b487a6c2006326765d6417964599eedc8c9/ortools/constraint_solver/routing.h#L844-L848
2) To "redeploy" a vehicle, you can duplicate the depot and use "reload".
see: https://github.com/google/or-tools/blob/master/ortools/constraint_solver/samples/cvrp_reload.py
I found another strategy that works in OR-Tools.
In my case, OR-Tools deploys the big vehicles first in the first solution, and for some 'tight' problems (tight time windows or really high demands) a minimum number of the biggest vehicles is required. So what if we do not have enough of the big vehicles?
As long as there are still plenty of smaller vehicles available, I can use 'dummy vehicles' to reach the first solution. The cost of these 'dummy vehicles' should be really high compared to the real vehicles; I set it about 1000 times higher.
After the solver reaches the first solution, give it time to improve the solution. After a while, the solver will deploy the real smaller vehicles and stop using the 'dummy vehicles'.
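The fleet-padding trick above can be sketched as follows (vehicle counts, capacities, and costs are hypothetical; the lists then feed the capacity dimension and the per-vehicle fixed costs of the routing model):

```python
# Real small vehicles plus 'dummy' big vehicles whose fixed cost is
# ~1000x higher, so they are only used when nothing else is feasible,
# e.g. to let the solver find a first solution at all.
real_small = [{"capacity": 2, "fixed_cost": 100}] * 6
dummy_big = [{"capacity": 4, "fixed_cost": 100 * 1000}] * 2
fleet = real_small + dummy_big

capacities = [v["capacity"] for v in fleet]
fixed_costs = [v["fixed_cost"] for v in fleet]

# In the routing model these would be applied with, e.g.:
#   routing.AddDimensionWithVehicleCapacity(demand_cb, 0, capacities,
#                                           True, "Capacity")
#   for vehicle, cost in enumerate(fixed_costs):
#       routing.SetFixedCostOfVehicle(cost, vehicle)
```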
I need to implement a "frequently bought together" recommendation engine on my website. I did my research and figured out that FP-growth would be the most appropriate algorithm for such an application. However, I am not able to find any solution/library on the internet that I can run on my transactions and that scales to millions of records.
The pyfpgrowth implementation is taking forever, and pyspark seems to stop yielding results as soon as I increase the size to over 500.
Please help me with a solution.
I'm currently running a project to optimize media mixed marketing channel spend. I followed this guide and am using the code here: https://towardsdatascience.com/an-upgraded-marketing-mix-modeling-in-python-5ebb3bddc1b6.
Towards the end of the article, it states:
"We have talked about optimizing spendings and how this was not possible with the old model. Well, with the new one it is, but I will not go into detail here. In short, treat our tuned_model a function and optimize it with a program of your choice, for example, Optuna or scipy.optimize.minimize. Add some budget constraints to the optimization, such as that the sum of spendings should be less than 1,000,000 in total."
I'm not sure how I can do this.
The goal here is to optimize spend (capped at a certain amount) to hit maximum profit.
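The article's suggestion (treat the tuned model as a function and optimize it under a budget constraint) can be sketched with scipy.optimize.minimize. The response function below is a made-up stand-in for tuned_model's predictions (a diminishing-returns curve per channel with invented coefficients); only the optimization pattern carries over: negate the objective to maximize, bound each channel's spend, and cap the total.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for tuned_model.predict: toy diminishing-returns response
# per channel. Replace with the actual fitted model's prediction.
def predicted_profit(spend):
    coef = np.array([30.0, 20.0, 10.0])  # hypothetical channel effects
    return float(np.sum(coef * np.sqrt(spend)))

budget = 1_000_000.0
n_channels = 3

result = minimize(
    lambda s: -predicted_profit(s),          # minimize negative = maximize
    x0=np.full(n_channels, budget / n_channels),
    bounds=[(0.0, budget)] * n_channels,     # no negative spend
    constraints=[{"type": "ineq",            # sum of spend <= budget
                  "fun": lambda s: budget - s.sum()}],
    method="SLSQP",
)
optimal_spend = result.x
```

With these toy coefficients the optimizer shifts spend toward the highest-response channel while keeping the total at or under the cap.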
This area is still very new to me, so forgive me if I am asking dumb questions. I'm utilizing MCTS to run a model-based reinforcement learning task. Basically I have an agent foraging in a discrete environment, where the agent can see out some number of spaces around it (I'm assuming perfect knowledge of its observation space for simplicity, so the observation is the same as the state). The agent has an internal transition model of the world represented by an MLP (I'm using tf.keras). Basically, for each step in the tree, I use the model to predict the next state given the action, and I let the agent calculate how much reward it would receive based on the predicted change in state. From there it's the familiar MCTS algorithm, with selection, expansion, rollout, and backprop.
Essentially, the problem is that this all runs prohibitively slowly. From profiling my code, I notice that a lot of time is spent doing the rollout, likely because the NN needs to be consulted many times and each prediction takes a nontrivial amount of time. Of course, I can probably stand to clean up my code to make it run faster (e.g. better vectorization), but I was wondering:
Are there ways to speed up/work around the traditional random walk done for rollout in MCTS?
Are there generally other ways to speed up MCTS? Does it just not mix well with using an NN in terms of runtime?
Thanks!
I am working on a similar problem and so far the following have helped me:
Make sure you are running TensorFlow on your GPU (you will have to install CUDA)
Estimate how many steps into the future your agent needs to calculate to still get good results
(The one I am currently working on) parallelize
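The parallelization point pairs well with batching the network calls: instead of querying the model once per step of each rollout, run many rollouts in lockstep and predict all their next states in one forward pass. A sketch of the pattern, using a tiny NumPy MLP as a stand-in for the tf.keras transition model (all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the keras transition model: a 2-layer MLP in NumPy.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 8)), np.zeros(8)

def predict(states):
    """Predict next states for a whole batch in one forward pass."""
    h = np.maximum(states @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

# 64 rollouts advanced in lockstep: each step is one batched call
# instead of 64 separate single-state predictions.
n_rollouts, state_dim, horizon = 64, 8, 10
states = rng.normal(size=(n_rollouts, state_dim))
for _ in range(horizon):
    states = predict(states)
```

With a real keras model the same idea applies: model.predict (or a direct __call__) on a (batch, state_dim) array amortizes the per-call overhead that dominates single-state predictions.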
I was wondering if it was a good idea to use Gekko to solve a lap time optimization:
finding the optimal path on a track to minimize total time by controlling the steering angle and the power output.
I'm fairly new to optimal control problems, so if you had pointers on how to start, that would be great.
Thanks
The bicycle optimal trajectory problem is possible in Gekko. I recommend that you start by working out simple benchmark problems and then take a staged approach (1D to 3D) to building your application. Also, if the authors are willing to share their model, it is often easier to replicate and extend their results. Here are some links to help you get started or see what is possible with a complex trajectory optimization problem (HALE aircraft).
Example problems
Introductory Optimal Control Benchmark Problems with the minimized final time.
Energy optimization for HALE aircraft trajectory optimization (source code).
Inverted Pendulum
There is also the machine learning and dynamic optimization course that is freely available online if you need additional help getting started.
I am working on a use case in which I have multiple vehicles (vans) acting as depots, delivery boys, and a set of customers in diverse locations to serve fresh food. Customers place an order from an application; a delivery boy then receives the order, picks up the food from the van, and delivers it within a promised delivery time (15 minutes). I want to optimize this problem so that the operational cost of traveling is reduced and delivery time is minimized. Is there any implementation in Python to solve the VRPTW problem? Please help.
You can find implementations of Dijkstra's shortest-path algorithm in Python.
An example implementation is:
http://code.activestate.com/recipes/577506-dijkstra-shortest-path-algorithm/
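For reference, the linked recipe boils down to a heapq-based sketch like the one below. Note that shortest paths are useful for building the travel-time matrix between locations, but they do not by themselves solve the routing (VRPTW) part of the problem.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```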
Read some research papers on the vehicle routing problem. I've seen papers that provide complete vehicle routing algorithms, approaching the problem in different ways by considering multiple criteria. Hence, it is possible to implement one or more of the algorithms from these papers and test them to pick the best solution.
If you want to solve a routing problem, the very first thing to figure out is which variant of the vehicle routing problem you're solving. I'm going to assume the vans are stationary (i.e. you're not trying to optimise the positioning of the vans themselves as well). Firstly, the problem is dynamic as it's happening in real time, i.e. it's a realtime route optimisation problem. If the delivery people are pre-assigned to a single van, then this might be considered a dynamic multi-trip vehicle routing problem (with time windows, obviously). Generally speaking, though, it's a dynamic pickup and delivery vehicle routing problem, as presumably the delivery people can pick up from different vans (so DPDVRPTW). You'd almost certainly need soft time windows as well, making it a DPDVRP with soft time windows. Soft time windows are essential because in a realtime setting you generally want to deliver as fast as possible, and so want to minimise how late you are. Normal 'hard' time windows like in the VRPTW don't let you deliver after a certain time, but place no cost penalty on delivering before this time (i.e. they're binary). Therefore you can't use them to minimise lateness.
I'm afraid I don't know of any open source solver in python or any other language that solves the dynamic pickup and delivery vehicle routing problem with soft time windows.
This survey article has a good overview of the subject. We also published a white paper on developing realtime route optimisers, which is probably an easier read than the academic paper. (Disclaimer - I am the author of this white paper).