Python | Calculate Volume with Objects of Set Size Internally

I'm trying to figure out how to calculate how many boxes can fit into a shipping container for my work. Does anyone know how I can do this with Python? There's an example here:
http://www.searates.com/reference/stuffing/
Basically, the user enters different dimensions for boxes, along with quantities and weights. The program would then return how full the container is, or how many more of each box could be added to it. It would also calculate the weight of the entire unit.
Even better would be a picture showing how the container is packed.
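As a starting point, here is a minimal sketch in Python of just the volume-and-weight bookkeeping, under the simplest possible model: it ignores packing geometry entirely (true 3D bin packing is NP-hard), so it only gives an upper bound on fill. The container specs and box data below are illustrative placeholders.

# Estimates container fill by volume and weight only; ignores geometry.
CONTAINER_VOLUME_M3 = 33.2   # placeholder: roughly a standard 20 ft container
CONTAINER_MAX_KG = 28200     # placeholder payload limit

def container_fill(boxes):
    """boxes: list of (length_m, width_m, height_m, quantity, weight_kg)."""
    used_volume = sum(l * w * h * qty for l, w, h, qty, _ in boxes)
    used_weight = sum(kg * qty for *_, qty, kg in boxes)
    return used_volume / CONTAINER_VOLUME_M3, used_weight / CONTAINER_MAX_KG

vol_frac, wt_frac = container_fill([(1.2, 0.8, 0.5, 20, 15.0)])
print(f"volume used: {vol_frac:.0%}, weight used: {wt_frac:.0%}")

Producing an actual arrangement (and a picture of it, as on the searates page) would require a bin-packing heuristic on top of this bookkeeping.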

Related

Add counter to Folium map of markers that meet a specific condition

I work for a small ISP and have written a script to collect data from our switches, which is then fed into Folium to produce a map of subscribers and their operating status as online or offline.
I need to add a counter, maybe through a div, that would count markers of any given status and display the total somewhere in the window, such as the top left corner.
The cluster option in Folium helps give an idea per area, but I would also like an overall total that is easily seen without having to add up cluster totals.
Does Folium have something like this that I've just missed, or is there another Python-based dashboard I could create the number with and add as a div somewhere?
I've looked at some Python-based dashboards, but most of those are heavily graph-based. I really only need a square box with a number that changes based on the total number of markers/subscribers that are offline.
Here is a description of how to add text or images to folium maps:
https://stackoverflow.com/a/65105474/13843906
Text added this way moves with the map, but an image can be kept at a fixed position in the map window.
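One common pattern (not necessarily the exact one in the linked answer) is to inject raw HTML into the map's root document, which sits outside the map panes and therefore stays fixed while panning. A minimal sketch; the map center and offline_count are placeholders to be filled from your own subscriber data:

import folium

m = folium.Map(location=[45.0, -93.0], zoom_start=10)  # placeholder center

offline_count = 42  # placeholder: total computed from your switch data

counter_html = f'''
<div style="position: fixed; top: 10px; left: 60px; z-index: 9999;
            background: white; padding: 8px 12px; border: 2px solid grey;
            font-size: 18px; font-weight: bold;">
    Offline: {offline_count}
</div>
'''
# Elements added to the root HTML live outside the map panes, so the div
# stays put while the map is panned or zoomed.
m.get_root().html.add_child(folium.Element(counter_html))
m.save("map_with_counter.html")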

Is it possible to calculate real-time distance of an object in an image without reference objects?

I have a picture of a human eye taken roughly 10 cm away using a mobile phone (no specifications regarding the camera). After some detection and contouring, I got 113 px as the Euclidean distance between the center of the detected iris and the outermost edge of the iris in the image. Dimensions of the image: 483x578 px.
I tried converting pixels to mm by simply multiplying the number of pixels by the size of a pixel in mm (1 px is roughly 0.264 mm), but that gives the proper length only if the image is at a 1:1 scale with the real eye, which is not the case here.
Edit:
Device used: One Plus 7T
Field of view = 117 degrees
Aperture = f/2.2
Distance photo was taken = 10 cm (approx)
Question:
Is there an optimal way to find the real radius of this particular eye with the information I have gathered through processing so far, without including a reference object in the image?
P.S. The actual HVID of the volunteer's iris is 12.40 mm, measured using Sirus (a high-end device for measuring iris radius; I'm trying to reproduce the same measurement using Python and OpenCV).
After months, I was able to come up with a result after a ton of research and a lot of trial and error. This is not the most ideal answer, but it gave me the expected results with decent precision.
Simply put, in order to measure object size/distance from an image, we need multiple parameters. In my case, I was trying to measure the diameter of an iris with a smartphone camera.
To make that possible, we need to know the following details prior to the calculation:
1. The size of the physical sensor (height and width, usually in mm)
(This is the camera sensor inside the smartphone; its details can be obtained from websites on the internet, but you need to know the exact brand and model of the smartphone used.)
Note: You cannot use random values for these; otherwise you will get inaccurate results. Every step/constraint must be considered carefully.
2. The size of the image taken (pixels)
Note: The size of the image can easily be obtained using img.shape, but make sure the image is not cropped. This method relies on the total width/height of the original smartphone image, so any modifications/inconsistencies will produce inaccurate results.
3. The focal length of the physical sensor (mm)
Note: Info regarding the focal length of the sensor can be found on the internet; random values should not be used. Make sure you take images with the autofocus feature disabled so the focal length is preserved. If you have autofocus on, the focal length will constantly change and the results will be all over the place.
4. The distance at which the image is taken (very important)
Note: As "Christoph Rackwitz" said in the comment section, the distance from which the image is taken must be known and should not be arbitrary. Guessing a number as input will always result in inaccuracy. Make sure you properly measure the distance from the sensor to the object using some sort of measuring tool. There are depth-detection algorithms on the internet, but they are not accurate in most cases and need to be recalibrated after every single try. That is indeed an option if you don't have a setup for taking consistent photos, but inaccuracies are inevitable, especially with objects like the iris, which require medical precision.
Once you have gathered all this "proper" information, the rest is to plug it into two very simple equations derived from similar triangles:
Object height/width on sensor (mm) = Sensor height/width (mm) × Object height/width (pixels) / Image height/width (pixels)
Real object height (in units) = Distance to object (in units) × Object height on sensor (mm) / Focal length (mm)
In the first equation, you must decide along which axis you want to measure. For instance, if the image was taken in portrait and you are measuring the width of the object on the image, then input the width of the image in pixels and the width of the sensor in mm.
The image height/width in pixels is simply the size of the image.
You must also obtain the object size in pixels by some means (for instance, from your contour detection).
If you are taking the image in landscape, make sure you pass the correct width and height.
Equation 2 is pretty simple as well.
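To make the two equations concrete, a small worked sketch in Python; all numbers below are illustrative placeholders chosen to be self-consistent, not taken from any real phone, so substitute your phone's actual sensor specs and your measured distance:

sensor_width_mm = 5.6      # placeholder: physical sensor width
image_width_px = 4000      # width of the original, uncropped image
object_width_px = 425      # object width in pixels from the contouring step
focal_length_mm = 4.8      # placeholder: real (not 35mm-equivalent) focal length
distance_mm = 100          # 10 cm from sensor to object

# Equation 1: project the object onto the physical sensor.
object_on_sensor_mm = sensor_width_mm * object_width_px / image_width_px

# Equation 2: scale back up by distance / focal length.
real_object_mm = distance_mm * object_on_sensor_mm / focal_length_mm

print(f"estimated real size: {real_object_mm:.2f} mm")  # ~12.40 mm here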
Things to consider:
No magnification (digital magnification can destroy any depth info)
No autofocus (already explained)
No cropping, editing the image size, or resizing (already explained)
No image skewing (rotating the image can make it unusable)
Do not substitute random values for any of these inputs (golden advice)
Do not tilt the camera while taking images (tilting distorts the image, so the object height/width will be altered)
Make sure the object and the camera are exactly in line
Don't use the image's EXIF data (its depth information is not accurate at all; do not rely on it)
Things I'm still unsure of:
Lens distortion / Manufacturing defects
Effects of field of view
Perspective Foreshortening due to camera tilt
Depth field cameras
DISCLAIMER: There are multiple ways to solve this problem; I chose this method, and I highly recommend that you explore further and see what you can come up with. You can extend this idea to measure pretty much any object with a smartphone (given the kind of images a normal smartphone can take).
(Please don't try to measure the size of an amoeba with this. It simply won't work, but you can certainly use some of the advice I have given to your advantage.)
If you have cool ideas or issues with my answer, please feel free to let me know; I would love to have a discussion. Feel free to correct me if I have made any mistakes or misunderstood any of these concepts.
Final Note:
No matter how hard you try, you cannot make a smartphone work and behave like a camera sensor that is specifically designed to take images for measuring purposes. A smartphone can never beat those, but we can manipulate the smartphone camera to achieve similar results up to a certain degree. Keep this in mind; I learnt it the hard way.

Interactive Scatter plots based on user input

I need to create a specific scatterplot using three data columns, x, y, z (height, weight, ID number), as the basic inputs.
I have height, weight, and a unique identifier for each of ~2000 individuals in the set. I want users to be able to precisely highlight "their location" within the scatterplot of all the data points. To do that out of 2000 data points, they'll need to input their unique ID into a text box, executing it and altering the graph to:
a) "accent" the input data point (e.g., change that individual's specific point to red while the other data points remain gray)
b) as a sort of "tooltip", provide the exact values of height, weight, and ID number in a readable box somewhere in or near the graph area, preferably in some open area of the graph's "state space". This is mainly to let them check their recorded values in our dataset. (Yes, they presumably know their own height and weight, but imagine they'd like to check whether we have misentered their values in the dataset.)
I figure there's an interactive graph package that allows this filter-by-typed-input-value option, but I have only seen filter options limited to a small number of predefined categories. For example, what I have seen permits a drop-down box to filter data based on z. The problem is that my drop-down for ID would have as many values as data points, and that's thousands, so it's unwieldy compared to my text-box idea.
I would like to do this in R or a package that can do it easily (I'm mainly a stats user, so my programming skills are limited to writing basic batch programs, .do files, with canned procedures). A non-R package that would easily let me create, edit, and slap this into a webpage would certainly do.
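Not an R answer, but since a non-R tool is acceptable: a minimal sketch in Python with Dash/Plotly, where a text box takes an ID, the matching point turns red while the rest stay gray, and the row's exact values appear as an annotation. The column names and sample data are placeholders for your ~2000-row dataset.

import pandas as pd
import plotly.graph_objects as go
from dash import Dash, dcc, html, Input, Output

df = pd.DataFrame({  # placeholder data; replace with your real columns
    "id": [1001, 1002, 1003],
    "height": [160, 175, 182],
    "weight": [55, 70, 90],
})

app = Dash(__name__)
app.layout = html.Div([
    dcc.Input(id="id-box", type="number", placeholder="Enter your ID"),
    dcc.Graph(id="scatter"),
])

@app.callback(Output("scatter", "figure"), Input("id-box", "value"))
def highlight(selected_id):
    # Color only the matching ID red; everything else stays gray.
    colors = ["red" if i == selected_id else "gray" for i in df["id"]]
    fig = go.Figure(go.Scatter(x=df["height"], y=df["weight"],
                               mode="markers", marker=dict(color=colors)))
    row = df[df["id"] == selected_id]
    if not row.empty:
        r = row.iloc[0]
        fig.add_annotation(x=r["height"], y=r["weight"], showarrow=True,
                           text=f"ID {r['id']}: {r['height']} cm, {r['weight']} kg")
    return fig

if __name__ == "__main__":
    app.run(debug=True)

The same interaction should be achievable in R with Shiny plus plotly, using a textInput in place of dcc.Input.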

Python browser game bot - How best to gather information from numbers that are displayed slightly differently every time?

I'm learning Python, and the first challenge I've set myself is to program a bot for a browser game. My script requires as input numbers that are displayed at certain positions during the game. I capture them by taking a screenshot of the displayed number and comparing it to a screenshot of the same number at the same position that I saved earlier on my computer. If the screenshot and the comparison image are the same, the function returns True.
This is the code I use to compare a screenshot with its comparison image:
from PIL import Image, ImageChops

def equal(img1, img2):
    # Identical images have no difference bounding box.
    return ImageChops.difference(img1, img2).getbbox() is None

def read_number(screenshot_path, comparison_path):
    screenshot = Image.open(screenshot_path)
    comparison_image = Image.open(comparison_path)
    if equal(screenshot, comparison_image):
        return 20  # the value this comparison image represents
The approach above works very well for 95% of the numbers I need as input for my script. Unfortunately, there are apparently some positions where the numbers are displayed with very subtle differences every time they appear. The differences aren't noticeable to the naked eye, and I can't find them even when I zoom in on the individual pixels and compare them, yet the function returns False every time I take a screenshot of the number and compare it to its comparison image.
One of my ideas was to compare the file size of the screenshot with the comparison image's file size, but this method is not reliable because the file sizes vary too much (another indication that there are indeed tiny differences every time the number is displayed). I have considered OCR, Mean Squared Error, and the Structural Similarity Index, but all of these methods are rather complex and would probably slow down the script tremendously.
I would really like to find a simpler solution if possible. Please share any ideas you might have. Here are two screenshots of the same number at the same position at different times: Screenshot1 Screenshot2
Thank you for your help!
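One simpler option, sketched below: keep the ImageChops approach but allow a small tolerance instead of requiring an exact match, treating two images as equal when the average per-pixel difference stays under a threshold. The threshold value is a placeholder to tune against your own screenshots.

from PIL import ImageChops, ImageStat

def nearly_equal(img1, img2, max_mean_diff=2.0):
    diff = ImageChops.difference(img1.convert("RGB"), img2.convert("RGB"))
    # ImageStat.Stat(...).mean is the average pixel value per channel;
    # tiny rendering differences give small means, real mismatches large ones.
    return all(m <= max_mean_diff for m in ImageStat.Stat(diff).mean)

This stays entirely in PIL, so it should be roughly as fast as the exact comparison.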

Package to visualize clustering in Python or Java?

I am doing agent-based modeling and currently have this set up in Python, but I can switch over to Java if necessary.
I have a Twitter dataset (11 million nodes and 85 million directed edges), and I have set up a dictionary/hashmap in which the key is a specific user A and the value is a list of all of A's followers (people who follow user A). The "nodes" are just unique integer ID numbers, with no other data. I want to visualize this data through some method of clustering. Not every individual node has to be visualized, but I want the nodes with the n most followers to be visualized clearly, with the surrounding area around each such node representing all the people who follow it. I'm modeling the spread of something throughout the map, so I need the nodes and the areas around them to change colors. Ideally the visualization would be continuous, but I don't mind it just taking snapshots at every i-th iteration.
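For the "n most followers" part, a minimal sketch assuming the dictionary maps user ID -> list of follower IDs as described; heapq.nlargest avoids fully sorting all 11 million entries:

import heapq

followers = {1: [2, 3, 4], 2: [3], 3: [1, 2, 4, 5]}  # placeholder data

def top_n_users(followers, n):
    # Iterating a dict yields its keys (user IDs); rank by follower count.
    return heapq.nlargest(n, followers, key=lambda u: len(followers[u]))

print(top_n_users(followers, 2))  # [3, 1]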
Additionally, I was thinking of having the clusters be separated such that:
if person A and person B each have enough followers to be visualized individually, and A and B are connected (one follows the other, or maybe even both ways), then both are visualized but kept visually separated from each other despite being connected, so that the visualization is clearer.
Anyway, I was wondering whether there is a package in Python (preferably) or Java that would allow one to do this relatively easily.
Gephi has a very nice GUI and an associated Java toolkit. You can experiment with visual layout in the GUI until you have everything looking the way you like and then code up your own version using the toolkit.
