I am seeking advice on formulating a problem to solve in OR-Tools.
The context is that I run a games business, and in a shift I can set up a varying number of game stations. Say I have 2 stations and 5 customers; assume the most even distribution, which is 3 customers at one station and 2 customers at the other.
The more customers there are at a station, the lower the game speed. So to maximise game speed with 5 customers, the best solution is to have 5 stations, one customer each, but this increases the operational cost.
How do I represent the total game speed in this kind of scenario, where customers should be distributed evenly across stations and the game speed depends on the number of customers at a station?
OR-Tools does not contain a general non-linear solver.
You can access quadratic solvers, although not easily, using the MPSolver API (targeting SCIP or Gurobi).
If your problem can be discretized, you can use CP-SAT to solve it. See this section on integer expressions.
Otherwise, OR-Tools is not what you are looking for.
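If the discretized route is viable for you, the original question is straightforward to model in CP-SAT. Below is a minimal sketch, assuming a made-up speed table (speed[k] = per-customer speed when k customers share a station) and a made-up per-station operating cost; all numbers are illustrative.

```python
from ortools.sat.python import cp_model

n_customers = 5
max_stations = 5
station_cost = 4
# speed[k] = per-customer game speed when k customers share one station
speed = [0, 10, 7, 5, 4, 3]

model = cp_model.CpModel()
load = [model.NewIntVar(0, n_customers, f"load_{j}") for j in range(max_stations)]
open_ = [model.NewBoolVar(f"open_{j}") for j in range(max_stations)]
per_cust = [model.NewIntVar(0, max(speed), f"spd_{j}") for j in range(max_stations)]

model.Add(sum(load) == n_customers)
for j in range(max_stations):
    model.AddElement(load[j], speed, per_cust[j])  # table lookup: speed[load[j]]
    model.Add(load[j] == 0).OnlyEnforceIf(open_[j].Not())
    model.Add(load[j] >= 1).OnlyEnforceIf(open_[j])

# Total speed of a station = load * per-customer speed: a product of two
# variables, so CP-SAT needs an auxiliary variable per station.
totals = []
for j in range(max_stations):
    t = model.NewIntVar(0, n_customers * max(speed), f"tot_{j}")
    model.AddMultiplicationEquality(t, [load[j], per_cust[j]])
    totals.append(t)

model.Maximize(sum(totals) - station_cost * sum(open_))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([solver.Value(v) for v in load])
```

The AddElement constraint handles the "speed depends on how many customers share a station" lookup, and AddMultiplicationEquality handles the load-times-speed product that makes the objective non-linear in the first place.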
I'm currently running a project to optimize marketing mix channel spend. I followed this guide and am using the code here: https://towardsdatascience.com/an-upgraded-marketing-mix-modeling-in-python-5ebb3bddc1b6.
Towards the end of the article, it states:
"We have talked about optimizing spendings and how this was not possible with the old model. Well, with the new one it is, but I will not go into detail here. In short, treat our tuned_model a function and optimize it with a program of your choice, for example, Optuna or scipy.optimize.minimize. Add some budget constraints to the optimization, such as that the sum of spendings should be less than 1,000,000 in total."
I'm not sure how I can do this.
The goal here is to optimize spend (capped at a certain amount) to hit maximum profit.
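One concrete way to follow the article's advice, as a minimal sketch: wrap the fitted model in a profit function and hand it to scipy.optimize.minimize with a budget constraint. The stand-in model below is only a placeholder for the article's tuned_model, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

class StandInModel:
    """Placeholder for the article's tuned_model: maps a (1, n_channels)
    spend vector to predicted revenue with diminishing returns."""
    def predict(self, X):
        return (300 * np.sqrt(X)).sum(axis=1)

tuned_model = StandInModel()
n_channels = 3
budget = 1_000_000

def negative_profit(spend):
    revenue = tuned_model.predict(spend.reshape(1, -1))[0]
    return -(revenue - spend.sum())  # minimizing -profit maximizes profit

constraints = [{"type": "ineq", "fun": lambda s: budget - s.sum()}]  # sum <= budget
bounds = [(0.0, budget)] * n_channels  # spend on each channel is non-negative

x0 = np.full(n_channels, budget / n_channels)  # start from an even split
result = minimize(negative_profit, x0, method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, -result.fun)  # spend per channel, predicted profit
```

SLSQP handles both the bounds and the inequality constraint directly; if the fitted model's response is not smooth, a derivative-free optimizer (or Optuna, as the article suggests) would be the safer choice.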
I am working on a use case in which I have multiple vehicles (vans) acting as depots, delivery boys, and a set of customers in diverse locations to serve fresh food. Customers place an order from an application; a delivery boy then receives the order, gets the food from a van, and delivers it within a promised delivery time (15 mins). I want to optimize this problem so that the operational cost of traveling is reduced and the delivery time is minimized. Just wanted to know: is there any implementation in Python to solve the VRPTW problem? Please help.
You can find implementations of Dijkstra's shortest-path algorithm in Python.
An example implementation is
http://code.activestate.com/recipes/577506-dijkstra-shortest-path-algorithm/
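In case that link goes stale, a self-contained version of the same algorithm is short; here is a minimal sketch where the graph is assumed to be an adjacency dict mapping each node to (neighbour, edge weight) pairs.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbour, weight in graph.get(node, ()):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist
```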
Read some research papers on the vehicle routing problem. I've seen papers that provide a complete algorithm for vehicle routing, and they approach it in different ways by considering multiple criteria. Hence, it's possible to implement one or more of the algorithms provided in these papers and test them to find the optimal solution.
If you want to solve a routing problem, the very first thing to figure out is what variant of the vehicle routing problem you're solving. I'm going to assume the vans are stationary (i.e. you're not trying to optimise the positioning of the vans themselves as well). Firstly, the problem is dynamic as it's happening in realtime - i.e. it's a realtime route optimisation problem. If the delivery people are pre-assigned to a single van, then this might be considered a dynamic multi-trip vehicle routing problem (with time windows, obviously). Generally speaking though, it's a dynamic pickup and delivery vehicle routing problem, as presumably the delivery people can pick up from different vans (so DPDVRPTW). You'd almost certainly need soft time windows as well, making it a DPDVRP with soft time windows. Soft time windows are essential because in a realtime setting you generally want to deliver as fast as possible, and so want to minimise how late you are. Normal 'hard' time windows like in the VRPTW don't let you deliver after a certain time, but place no cost penalty on delivering before this time (i.e. they're binary). Therefore you can't use them to minimise lateness.
I'm afraid I don't know of any open source solver in python or any other language that solves the dynamic pickup and delivery vehicle routing problem with soft time windows.
This survey article has a good overview of the subject. We also published a white paper on developing realtime route optimisers, which is probably an easier read than the academic paper. (Disclaimer - I am the author of this white paper).
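For the plain static VRPTW (as opposed to the dynamic pickup-and-delivery variant discussed above), OR-Tools' routing library does have a Python implementation, and its dimensions support soft upper bounds, which give exactly the lateness penalty described. A minimal sketch with made-up data, as a possible starting point:

```python
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

time_matrix = [[0, 5, 9], [5, 0, 4], [9, 4, 0]]  # minutes between nodes (made up)
windows = [(0, 0), (0, 15), (0, 15)]             # promised delivery windows
num_vehicles, depot = 2, 0

manager = pywrapcp.RoutingIndexManager(len(time_matrix), num_vehicles, depot)
routing = pywrapcp.RoutingModel(manager)

def transit(i, j):
    return time_matrix[manager.IndexToNode(i)][manager.IndexToNode(j)]

cb = routing.RegisterTransitCallback(transit)
routing.SetArcCostEvaluatorOfAllVehicles(cb)
routing.AddDimension(cb, 30, 120, True, "Time")  # slack, horizon, start at zero
time_dim = routing.GetDimensionOrDie("Time")

for node, (earliest, latest) in enumerate(windows):
    if node == depot:
        continue
    idx = manager.NodeToIndex(node)
    time_dim.CumulVar(idx).SetMin(earliest)
    # Soft upper bound: lateness beyond `latest` is penalised, not forbidden.
    time_dim.SetCumulVarSoftUpperBound(idx, latest, 100)

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = (
    routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
solution = routing.SolveWithParameters(params)
```

This does not handle the dynamic arrival of orders; a common practical workaround is to re-solve the static problem every time a new order arrives, with already-dispatched deliveries fixed in place.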
I'm trying to study customer behavior. Basically, I have information on customers' loyalty-point activities (e.g. how many points they have earned, how many points they have used, how recently they have used/earned points, etc.). I'm using R to conduct this analysis.
I'm just wondering how I should go about segmenting customers based on the above information. I'm trying to apply the RFM concept and then use k-means to segment my customers (although I have a few more variables than just R, F, M, as I have recency, frequency, and monetary value for both points earned and points used, as well as other ratios and metrics). Is this a good way to do it?
Essentially I have two objectives:
1. To segment customers
2. Via segmenting customers, identify customer behavior (e.g. customers who spend all of their points before churning), provided that segmentation is the right method for such a task
Clustering <- kmeans(RFM_Values4, centers = 10)
Please enlighten me; I need some guidance on the best methods to tackle such problems.
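Two things are worth checking with this workflow, shown in the sketch below (Python, though the steps are identical in R, and the data is a random placeholder for the real RFM matrix): standardise the features first, since k-means is distance-based and point counts, recencies, and ratios live on very different scales, and compare several values of k rather than fixing centers = 10 up front.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
rfm = rng.random((500, 6))                # placeholder for the real RFM matrix

X = StandardScaler().fit_transform(rfm)   # put all features on comparable scales

for k in range(2, 11):                    # compare candidate k values
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))  # higher is better-separated
```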
Your attempts variable is always less than 5 because it is never incremented, so your loop is infinite.
I'm doing an ongoing survey, every quarter. We get people to sign up (where they give extensive demographic info).
Then we get them to answer six short questions with 5 possible values much worse, worse, same, better, much better.
Of course, over time we will not get the same participants: some will drop out and some new ones will sign up. So I'm trying to decide how best to build a DB and code (I hope to use Python, NumPy?) to allow for ongoing collection and analysis by the various categories defined by the initial demographic data. As of now we have 700 or so participants, so the dataset is not too big.
I.e.:
Demographics: UID, north/south, residential/commercial. Then the answers to the 6 questions for Q1.
Same for Q2 and so on. Then I need to be able to slice, dice, and average the values of the quarterly answers by the various demographics to see trends over time.
The averaging, grouping, and so forth is modestly complicated by having differing participants each quarter.
Any pointers to design patterns for this sort of DB and its analysis? Is this a sparse matrix?
Regarding the survey analysis portion of your question, I would strongly recommend looking at the survey package in R (which includes a number of useful vignettes, including "A survey analysis example"). You can read about it in detail on the webpage "survey analysis in R". In particular, you may want to have a look at the page entitled database-backed survey objects which covers the subject of dealing with very large survey data.
You can integrate this analysis into Python with RPy2 as needed.
This is a Data Warehouse. Small, but a data warehouse.
You have a Star Schema.
You have Facts:
response values are the measures
You have Dimensions:
time period. This has many attributes (year, quarter, month, day, week, etc.). This dimension allows you to accumulate unlimited responses to your survey.
question. This has some attributes. Typically your questions belong to categories or product lines or focus areas or anything else. You can have lots of question "category" columns in this dimension.
participant. Each participant has unique attributes and a reference to a demographic category. Your demographic category can -- very simply -- enumerate your demographic combinations. This dimension allows you to follow respondents or their demographic categories through time.
Buy Ralph Kimball's The Data Warehouse Toolkit and follow those design patterns. http://www.amazon.com/Data-Warehouse-Toolkit-Complete-Dimensional/dp/0471200247
Buy the book. It's absolutely essential that you fully understand it all before you start down a wrong path.
Also, since you're doing data warehousing, look at all the [data-warehouse] questions on Stack Overflow. Read every data warehousing blog you can find.
There's only one relevant design pattern -- the Star Schema. If you understand that, you understand everything.
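To make the star schema concrete, here is a minimal pandas sketch with hypothetical table and column names: a fact table of responses joined to a participant dimension, then grouped by demographics and time period. Differing participants per quarter fall out naturally, since each group's mean is taken over whoever responded that quarter.

```python
import pandas as pd

facts = pd.DataFrame({          # fact table: one row per (participant, quarter, question)
    "uid": [1, 1, 2, 2],
    "quarter": ["Q1", "Q1", "Q1", "Q2"],
    "question": ["q1", "q2", "q1", "q1"],
    "response": [4, 3, 2, 5],   # 1 = much worse ... 5 = much better
})
participants = pd.DataFrame({   # participant dimension, carrying demographics
    "uid": [1, 2],
    "region": ["North", "South"],
    "zone": ["residential", "commercial"],
})

joined = facts.merge(participants, on="uid")
trend = joined.groupby(["quarter", "region", "question"])["response"].mean()
print(trend)
```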
On the analysis: if your six questions have been posed in a way that would lead you to believe the answers will be correlated, consider conducting a factor analysis on the raw scores first. Often, comparing the factors across regions or customer types has more statistical power than comparing across questions alone. Also, the factor scores are more likely to be normally distributed (they are a weighted sum of 6 observations), while the six questions alone would not be. This allows you to apply t-tests based on the normal distribution when comparing factor scores.
One watchout, though. If you assign numeric values to answers - 1 = much worse, 2 = worse, etc. you are implying that the distance between much worse and worse is the same as the distance between worse and same. This is generally not true - you might really have to screw up to get a vote of "much worse" while just being a passive screw up might get you a "worse" score. So the assignment of cardinal (numerics) to ordinal (ordering) has a bias of its own.
The unequal number of participants per quarter isn't a problem - there are statistical t-tests that deal with unequal sample sizes.
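A minimal sketch of both suggestions, with made-up data standing in for the survey responses: extract factor scores from the six questions, then compare two demographic groups with Welch's t-test, which does not assume equal sample sizes or variances.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
answers = rng.integers(1, 6, size=(700, 6))   # placeholder: 700 respondents, 6 questions

scores = FactorAnalysis(n_components=2).fit_transform(answers)

north = scores[:400, 0]                       # factor-1 scores by region
south = scores[400:, 0]                       # (placeholder group split)
t, p = stats.ttest_ind(north, south, equal_var=False)  # Welch's t-test
print(t, p)
```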
I'm new to the whole traveling-salesman problem as well as Stack Overflow, so let me know if I say something that isn't quite right.
Intro:
I'm trying to code a profit/time-optimized multiple-trade algorithm for a game which involves multiple cities (nodes) within multiple countries (areas), where:
The physical time it takes to travel between two connected cities is always the same;
Cities aren't linearly connected (you can teleport between some cities in the same time);
Some countries (areas) have teleport routes which make the shortest path available through other countries.
The traveler (or trader) has a limit on his coin-purse, the weight of his goods, and the quantity tradeable in a certain trade-route. The trade route can span multiple cities.
Question Parameters:
There already exists an in-memory database (Python: sqlite) which holds trades keyed by their source city and destination city, with the shortest-path cities in between stored as an array, the amount, and the limiting factor with its % return on total capital (or, in the case that none of the factors are limiting, just the method that gives the highest return on total capital).
I'm trying to find the optimal profit for a certain preset chunk of time (i.e. 30 minutes)
The act of crossing into a new city is actually simultaneous
It usually takes the same defined amount of time to travel across the city map (i.e. 2 minutes)
The act of initiating the first or any new trade takes the same time as crossing one city map (i.e. 2 minutes)
My starting point might not actually have a valid trade (I would have to travel to the first/nearest/best one)
Pseudo-Solution So Far
Optimization
First, I realize that because I have a limit on the time it takes, and I know how long each hop takes (including one hop's worth of time for initiating the trade), I can limit the graph to all trades whose hops are under or equal to max_hops = int(max_time/route_time) - 1. I cut elements of the trade database that don't fall within this time limit, pruning cities that have shortest-path lengths greater than max_hops.
I make another entry into the trades database that includes the shortest paths between my current city and the starting cities of all the existing trades that aren't my current city, and give them a return of 0%. I would limit these to where the number of city hops is less than max_hops, and I would also calculate whether the current city to the starting city plus that trade's shortest-path hops would exceed max_hops, removing those that exceeded this limit.
Then I take the remaining trades that aren't (current_city -> starting_city) and add trade routes with a return of 0% between all destination and starting cities where the hops don't exceed max_hops.
Then I make one last prune for all cities that aren't in the trades database as either a starting city, destination city, or part of the shortest path city arrays.
Graph Search
I am left with a (much) smaller graph of trades feasible within the time limit (i.e. 30 mins).
Because all the nodes that are connected are adjacent, the connections are by default all weighted 1. I divide the % return by the number of hops in the trade, then take the inverse and add 1 (this would mean a weight of 1.01 for a 100% return route). In the case where the return is 0%, I add ... 2?
It should then return the most profitable route...
The Question:
Mostly,
How do I add the ability to take multiple routes when I have leftover money or space, while keeping the route-finding discrete to single trade routes? Due to the nature of the goods being sold at multiple prices and quantities within a city, there would be a lot of overlapping routes.
How do I penalize initiating a new trade route?
Is graph search even useful in this situation?
On A Side Note,
What kinds of prunes/optimizations to the graph should I (or should I not) make?
Is my weighting method correct? I have a feeling it will give me disproportionate weights. Should I use the actual return instead of the percentage return?
If I am coding in Python, are libraries such as python-graph suitable for my needs? Or would it save me a lot of overhead (as I understand it, graph-search algorithms can be computationally intensive) to write a specialized function?
Am I best off using A* search?
Should I be precalculating shortest-path points in the trade database and making these optimizations up front, or should I leave it all to the graph search?
Can you notice anything that I could improve?
If this is a game where you are playing against humans I would assume the total size of the data space is actually quite limited. If so I would be inclined to throw out all the fancy pruning as I doubt it's worth it.
Instead, how about a simple breadth-first search?
Build a list of all cities, mark them unvisited
Take your starting city, mark its travel time as zero
for each city:
    if not finished and travel time is not infinity:
        attempt to visit all neighbors, only recording the time if the city is unvisited
        mark the city finished
repeat until all cities have been visited
O(): the outer loop executes cities * maximum hops times. The inner loop executes once per city. No memory allocations are needed.
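A queue-based Python version of that search, as a minimal sketch (this is standard BFS, so each city and each connection is processed once); the graph is assumed to be an adjacency dict mapping a city to its neighbours.

```python
from collections import deque

def travel_times(graph, start):
    """Hop count (travel time in map crossings) from start to every city."""
    times = {start: 0}
    queue = deque([start])
    while queue:
        city = queue.popleft()
        for neighbour in graph.get(city, ()):
            if neighbour not in times:       # record the time only if unvisited
                times[neighbour] = times[city] + 1
                queue.append(neighbour)
    return times
```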
Now, for each city look at what you can buy here and sell there. When figuring the rate of return on a trade remember that growth is exponential, not linear. Twice the profit for a trade that takes twice as long is NOT a good deal! Look up how to calculate the internal rate of return.
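A tiny worked example of that point (numbers made up): a 20% trade that takes 10 minutes beats a 35% trade that takes 20 minutes, because over a fixed time budget what matters is the compounded per-minute rate.

```python
def per_minute_rate(profit_fraction, minutes):
    # Compounded rate: solve (1 + rate) ** minutes == 1 + profit_fraction.
    return (1 + profit_fraction) ** (1 / minutes) - 1

print(per_minute_rate(0.20, 10))  # ~0.0184 per minute
print(per_minute_rate(0.35, 20))  # ~0.0151 per minute -- worse, despite more profit
```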
If the current city has no trade don't bother with the full analysis, simply look over the neighbors and run the analysis on them instead, adding one to the time for each move.
If you have CPU cycles to spare (and you very well might, anything meant for a human to play will have a pretty small data space) you can run the analysis on every city adding in the time it takes to get to the city.
Edit: Based on your comment you have tons of CPU power available as the game isn't running on your CPU. I stand by my solution: Check everything. I strongly suspect it will take longer to obtain the route and trade info than it will be to calculate the optimal solution.
I think you've defined something that fits into a class of problems called inventory-routing problems. I assume, since you have both goods and coin, that the traveller is both buying and selling along the chosen route. Let's first assume that EVERYTHING is deterministic - all quantities of goods in demand, supply available, buying and selling prices, etc. are known in advance. The stochastic version gets more difficult (obviously).
One objective would be to maximize profits with a constraint on the purse and the goods. If the traveller has to return home, it looks like a tour; if not, it looks like a path. Since you haven't required the traveller to visit EVERY node, it is NOT a TSP. That's good - shortest-path problems are generally much easier than TSPs to solve.
Because of the side constraints and the limited choice of next steps at each node, I'd consider using dynamic programming as a first attempt at a solution technique. It will help you enumerate what you buy and sell at each stage, and there's a limited number of stages. Also, because you put a time constraint on the decision, that limits the state space of choices.
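A minimal sketch of that dynamic program, with hypothetical data structures (trades maps a city to (destination, hops, profit) tuples, and neighbours is an adjacency dict). The state here is just (city, time left); the purse and weight limits from the question would be added as extra state dimensions.

```python
from functools import lru_cache

def best_profit(start, max_time, trades, neighbours, hop_time=2):
    """Maximise profit achievable within max_time minutes from `start`."""
    @lru_cache(maxsize=None)
    def solve(city, time_left):
        best = 0.0
        if time_left >= hop_time:
            for nxt in neighbours.get(city, ()):      # option: move without trading
                best = max(best, solve(nxt, time_left - hop_time))
        for dest, hops, profit in trades.get(city, ()):
            cost = (hops + 1) * hop_time              # +1 hop to initiate the trade
            if cost <= time_left:                     # option: execute this trade
                best = max(best, profit + solve(dest, time_left - cost))
        return best
    return solve(start, max_time)
```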
To those who suggested Dijkstra's algorithm - you may be right; the labelling conventions would need to include the time, coin, and goods and the corresponding profits. It may be that the assumptions of Dijkstra's don't hold with the added complexity of profit. I haven't thought that through yet.
Here's a link to a similar problem in capital budgeting.
Good luck !