PyCharm: Process finished with exit code -1073741515 (0xC0000135) - python

I've been trying to scrape data from websites using selenium in Python (3.7.4 for 32-bit).
The script runs through properly and is supposed to concatenate the columns (using NumPy) and then write the data frame to .csv files.
However, PyCharm (2018.3.7) gives me the following error code at some point during the data scraping:
Process finished with exit code -1073741515 (0xC0000135)
I could not find anything specific about the error code. Does anyone know why it would occur?
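One thing worth knowing: negative exit codes like this are signed 32-bit renderings of Windows NTSTATUS values, and masking recovers the hex code shown in parentheses. 0xC0000135 is STATUS_DLL_NOT_FOUND, i.e. a DLL the process needs could not be loaded. A quick sketch of the conversion:

```python
# Convert the signed exit code back to the Windows NTSTATUS hex value.
code = -1073741515
print(hex(code & 0xFFFFFFFF))  # 0xc0000135 -> STATUS_DLL_NOT_FOUND
```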

Related

Checking if a .cmd file was executed successfully with python

I wrote a script that executes certain .cmd files. I'm trying to find a way to check whether the execution finished with errors or not. This is what the final output line of the .cmd file looks like; it shows you the number of warnings and errors:
https://i.stack.imgur.com/y6K1m.png (sorry, I don't have enough rep to embed the image in the text)
I tried saving the console output to a variable and then checking whether the substring "Error(s)" was in the text, but that didn't seem to work. I'm fairly new to Python, I'm running out of ideas and things to try, and any suggestions would be appreciated. Let me know if you need more details. Thanks in advance!
Batch scripts usually set a return code that you can check to see whether they completed successfully.
Check this link for the exit codes
And check this question for the python code to get them
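A sketch combining both approaches (the .cmd path in the usage comment is hypothetical): run the file with subprocess, then look at the return code and, as a fallback, scan the captured output for the "Error(s)" summary line.

```python
import subprocess

def run_cmd(command):
    """Run a command/batch file and return (exit_code, combined output).

    A non-zero exit code usually means the script failed; as a
    fallback, the captured output can be scanned for the "Error(s)"
    summary line printed at the end of the log.
    """
    result = subprocess.run(
        command,
        shell=True,            # lets the Windows shell resolve .cmd files
        capture_output=True,   # grab stdout/stderr instead of printing them
        text=True,
    )
    return result.returncode, result.stdout + result.stderr

# Hypothetical usage:
# code, output = run_cmd(r"C:\builds\compile.cmd")
# if code != 0 or "Error(s)" in output:
#     print("the build reported errors")
```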

Cannot run Python script in Power BI on dataset imported from Azure Blob

I imported a large dataset into Power BI from an Azure Blob container and am now trying to run some custom Python plotting scripts; however, the script runs consistently fail. I get the error below; it cites the DataSource, so I think the error could be coming from it. I have tried running Python on its own: I created some dummy data and ran some transforms on it, and it works perfectly. Has anyone encountered this error before, and how can I fix it? I cannot install Anaconda, because I am using this for work in a large company and there is no budget allocated for Anaconda licenses.
P.S. Privacy level is set to Public everywhere.
P.P.S. The Python PATH looks correct.
DataSource.Error: ADO.NET: A problem occurred while processing your Python script.
Here are the technical details: [Expression.Error] We couldn't parse the input provided as a Time value.
Details:
DataSourceKind=Python
DataSourcePath=Python
Message=A problem occurred while processing your Python script.
Here are the technical details: [Expression.Error] We couldn't parse the input provided as a Time value.
ErrorCode=-2147467259
ExceptionType=Microsoft.PowerBI.Scripting.Python.Exceptions.PythonUnexpectedException
The error has to do with mistakes Power BI makes while parsing the data. The safest bet is to bind the data types of all columns that contain NaN values in the original data to text in Power Query (also possible in the GUI). I wonder if there is a way to deal with NaN values automatically.
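One way that automatic handling could be sketched (using pandas, which the Power BI Python integration runs on; the function name is my own) is to cast every column containing NaN to text before Power BI's type inference sees it:

```python
import pandas as pd

def nan_columns_to_text(df):
    """Return a copy of df where every column containing NaN is cast
    to pandas' string dtype, so downstream type inference cannot
    misparse the values (e.g. as a Time); NaN cells stay missing
    rather than becoming the literal text "nan"."""
    out = df.copy()
    for col in out.columns:
        if out[col].isna().any():
            out[col] = out[col].astype("string")
    return out
```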

python dash failed to fetch, failed to read from JSON

Recently I have been working with Dash, and these two errors show up randomly but rarely. I have no idea why. My program is huge, I don't know which part of it is problematic, and I cannot really reproduce the errors. Do you have any suggestions on what to look for?
For the failed-to-fetch problem, this is the more detailed error:
dash.exceptions.DependencyException: "dash_renderer" is registered but the path requested is not valid.

How do I setup my scraper to run multiple spiders using a script or exe?

I'm on my 3rd Scrapy project and I'm getting a little bolder.
I want to give this program to non-technical users, so either a command-line script or, preferably, an .exe.
First off, I started using CrawlerProcess; using the documentation I came up with this:
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess()
process.crawl(FirstSpider)
process.crawl(SecondSpider)
process.crawl(ThirdSpider)
process.crawl(LastSpider)
process.start()  # blocks until all four spiders finish
Each spider is in its own .py file, so I've imported each one into a single script and put this block of code at the bottom; if there's a better way, I'm all ears.
I tried running this as-is from the command line, and it returns an error saying scraper.list doesn't exist when I try to import the other spiders.
I can run each scraper from within its file in the VS Code terminal using the typical scrapy crawl xyz... so how do we wrap it up for end users?
Thanks in advance.
Thanks Furas, and I apologize for the omission; I ended up solving my own problem. The script was nested too deep in the file structure and I had to move it further up. It was unable to read the contents of the Scraper.items folder because it wasn't traveling up and then back down the file path.
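A sketch of that fix in code (the helper name is mine; scrapy.cfg is the file that sits at every Scrapy project root): instead of moving the runner script around, walk up the directory tree until the project root is found and prepend it to sys.path, so imports like Scraper.items resolve no matter where the runner lives.

```python
import sys
from pathlib import Path

def add_project_root(start, marker="scrapy.cfg"):
    """Walk upward from `start` until the directory containing
    `marker` is found, prepend that directory to sys.path, and
    return it. With the default marker this finds a Scrapy
    project root."""
    here = Path(start).resolve()
    for candidate in [here, *here.parents]:
        if (candidate / marker).exists():
            sys.path.insert(0, str(candidate))
            return candidate
    raise FileNotFoundError(f"no {marker} found above {start}")

# Hypothetical usage at the top of the runner script:
# add_project_root(__file__)
# from Scraper.spiders.first import FirstSpider
```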
I've almost wrapped up this project but I'm having trouble with the exporter, I've posted that question here:
Using Scrapy JsonItemsLinesExporter, returns no value

Implementing DCT2/DCT3 in Python

I am having some issues with my code for the final implementation of a data-to-image library using the JPEG DCT2/DCT3 process. Linked below is the source code I am using; I am running the Python code under SageMathCloud. I've been trying to figure out this specific error for the past several hours, and no matter what I do, it just doesn't work. I get the same error message every time, and I just can't track down the reason why.
Gist
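Without the Gist it's hard to pin down the specific error, but for comparison, here is a minimal pure-NumPy sketch of the unnormalized DCT-II/DCT-III pair (the transforms JPEG builds on). With these definitions, DCT-III undoes DCT-II only up to a factor of N/2, and a missing scale factor like that is a common source of bugs in hand-rolled implementations.

```python
import numpy as np

def dct2(x):
    """Unnormalized DCT-II (the JPEG 'forward' transform):
    X[k] = sum_i x[i] * cos(pi * k * (2i + 1) / (2N))."""
    n = len(x)
    k = np.arange(n)[:, None]          # output frequency index
    i = np.arange(n)[None, :]          # input sample index
    return (x * np.cos(np.pi * k * (2 * i + 1) / (2 * n))).sum(axis=1)

def dct3(X):
    """Unnormalized DCT-III; note dct3(dct2(x)) == (N / 2) * x,
    so recovering x requires multiplying by 2 / N."""
    n = len(X)
    k = np.arange(1, n)[None, :]       # k = 0 term handled separately below
    i = np.arange(n)[:, None]
    return X[0] / 2 + (X[1:] * np.cos(np.pi * k * (2 * i + 1) / (2 * n))).sum(axis=1)
```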
