How does Wit.ai connect with Python files on my computer?

I just started with Wit.ai and I'm trying to make the weather forecast bot from the quickstart. The quickstart mentions that the action of the bot (getForecast) should be a function defined in a Python file (.py) on my computer. However, I'm not sure how Wit.ai connects with the Python files on my computer. How does Wit.ai know which file to run when a function is called?
PS: I have downloaded the pywit examples and read through the code, but I still don't see how the Wit.ai platform finds the correct function in the correct file.

You specify the actions by mapping the action names to your local Python functions when you construct the client:
actions = {
    'send': send,
    'merge': merge,
    'select-joke': select_joke,
}
client = Wit(access_token=access_token, actions=actions)
Wit matches these names against the actions referenced in the stories you trained. You can check https://github.com/wit-ai/pywit/blob/master/examples/joke.py for the Python code and its corresponding Wit story at https://wit.ai/patapizza/example-joke
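To be clear about the mechanics: the Wit.ai platform never reaches into your machine or runs your files. Your script, using the pywit client, runs locally, sends text to Wit's HTTP API, receives back the *name* of the action to perform, and looks that name up in the `actions` dict you supplied. A minimal sketch of that dispatch step (the lookup logic here is illustrative, not pywit's exact source; the joke text is made up):

```python
# Illustrative dispatch: given an action name returned by Wit's API,
# look up and call the matching local function.
def select_joke(request):
    return {'joke': 'Why did the chicken cross the road?'}

actions = {
    'select-joke': select_joke,
}

def dispatch(action_name, request):
    # Wit only returns the name; the mapping from name to function
    # lives entirely in your local process.
    handler = actions[action_name]
    return handler(request)

result = dispatch('select-joke', {'session_id': 'abc', 'context': {}})
print(result['joke'])  # Why did the chicken cross the road?
```

So the "connection" is one-way: your machine polls Wit, and Wit only ever sees the action names, never your files.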

Related

How to run a python script on Azure for CSV file analysis

I have a python script on my local machine that reads a CSV file and outputs some metrics. The end goal is to create a web interface where the user uploads the CSV file and the metrics are displayed, while all being hosted on Azure.
I want to use a VM on Azure to run this python script.
The script takes the CSV file and outputs metrics which are stored in CosmosDB.
A web interface reads from this DB and displays graphs from the data generated by the script.
Can someone elaborate on the steps I need to follow to achieve this? Detailed steps are not essentially required, but a brief overview with links to relevant learning sources would be helpful.
There's an article that lists the primary options for hosting sites in Azure: https://learn.microsoft.com/en-us/azure/developer/python/quickstarts-app-hosting
As Sadiq mentioned, Functions is probably your best choice as it will probably be less expensive, less maintenance, and can handle both the script and the web interface. Here is a python tutorial for that method: https://learn.microsoft.com/en-us/azure/developer/python/tutorial-vs-code-serverless-python-01
Option 2 would be to run a traditional website on an App Service plan, with background tasks handled either by Functions or a WebJob; they both use the WebJobs SDK, so the code is very similar: https://learn.microsoft.com/en-us/learn/paths/deploy-a-website-with-azure-app-service/
VMs are an option if neither of those two works, but they come with significantly more administration. This learning path has info on how to do this. The website in it is built on the MEAN stack, but the approach is applicable to Python as well: https://learn.microsoft.com/en-us/learn/paths/deploy-a-website-with-azure-virtual-machines/
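Whichever hosting option you pick, the core of the pipeline is the same: read the uploaded CSV, compute metrics, write the results to Cosmos DB. As a rough sketch of just the metrics step (the column name `value` and the metrics themselves are made up for illustration; the Cosmos DB write is left out):

```python
import csv
import io

def compute_metrics(csv_text):
    """Compute simple summary metrics from CSV text with a 'value' column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    values = [float(row['value']) for row in reader]
    return {
        'count': len(values),
        'total': sum(values),
        'mean': sum(values) / len(values) if values else 0.0,
    }

sample = "name,value\na,10\nb,30\n"
print(compute_metrics(sample))  # {'count': 2, 'total': 40.0, 'mean': 20.0}
```

In a Functions setup, this logic would live inside an HTTP-triggered function that receives the uploaded file and then writes the resulting dict to Cosmos DB (e.g. via an output binding).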

Why am I not able to pass a parameter to an AWS Batch job?

I have a Dockerfile with the following lines:
FROM python
COPY sysargs.py /
CMD python sysargs.py --date Ref::date
And my python file sysargs.py looks like this:
import sys
print('The command line arguments are:')
a = sys.argv[1]
print(a)
I just want to pass the date parameter and print it, but after passing a date value I get the output "Ref::date".
Can someone tell me what I have done wrong?
I am trying to replicate as mentioned in how to retrieve aws batch parameter value in python?.
In the context of your Dockerfile, Ref::date is just a literal string, hence that is what the Python script prints. AWS Batch only substitutes Ref:: placeholders in a command specified in the job definition (or a command override), not in a CMD baked into the image.
If you want date to be a value passed in from an external source at build time, then you can use ARG. Take 10 minutes to read through this guide.
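Note also that with the CMD shown, `sys.argv[1]` is the flag `--date` itself, not its value. Parsing the flag explicitly, e.g. with argparse, is more robust regardless of how the value reaches the container; a minimal sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--date', required=True, help='date to process')

# Parse an explicit list here for demonstration; inside the container
# this would be parser.parse_args(), which reads sys.argv.
args = parser.parse_args(['--date', '2023-01-15'])
print('The date argument is:', args.date)  # The date argument is: 2023-01-15
```

With this, a missing or malformed `--date` produces a clear error instead of silently printing the wrong argv element.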
My experience with AWS Batch "parameters"
I have been working on a project for about 4 months now. One of my tasks was to connect several AWS services so that the last file an application uploaded to an S3 bucket would be processed in the cloud.
What I needed
The way this works is the following. Through a website, a user uploads a file that is sent to a back-end server and then to an S3 bucket. This event triggers an AWS Lambda function, which creates and runs an instance of an AWS Batch job that was defined previously (based on a Docker image); the job retrieves the file from the S3 bucket, processes it, and saves some results in a database. By the way, all the code I am using is written in Python.
Everything worked like a charm until I found it really hard to get the filename of the S3 object that generated the event as a parameter inside the Python script being executed in the Docker container run by the AWS Batch job.
What I did
After a lot of research and development, I came up with a solution for my problem. The issue is that the word "parameter", for AWS Batch jobs, does not mean what a user might expect. Instead, we need to use containerOverrides, the way I show below: defining an "environment" variable inside the running container by providing a name/value pair for that variable.
# At some point we had defined aws_batch like this:
#
# aws_batch = boto3.client(
#     service_name="batch",
#     region_name='<OurRegion>',
#     aws_access_key_id='<AWS_ID>',
#     aws_secret_access_key='<AWS_KEY>',
# )
aws_batch.submit_job(
    jobName='TheJobNameYouWant',
    jobQueue='NameOfThePreviouslyDefinedQueue',
    jobDefinition='NameOfThePreviouslyDefinedJobDefinition',
    # parameters={              # THIS DOES NOT WORK
    #     'FILENAME': FILENAME  # THIS DOES NOT WORK
    # },                        # THIS DOES NOT WORK
    containerOverrides={
        'environment': [
            {
                'name': 'filename',
                'value': 'name_of_the_file.png'
            },
        ],
    },
)
This way, from my Python script, inside the Docker container, I could access the environment variable value using the well-known os.getenv('<ENV_VAR_NAME>') function.
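Inside the container, reading the override back is a one-liner. A minimal sketch (the variable name `filename` matches the containerOverrides above; setting os.environ here only simulates what the Batch container would already have in its environment):

```python
import os

# Simulate the environment the Batch container sees after the
# containerOverrides above are applied.
os.environ['filename'] = 'name_of_the_file.png'

filename = os.getenv('filename')
print(filename)  # name_of_the_file.png
```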
You can also check in your AWS console, under the Batch menu, both the Job configuration and Container details tabs, to make sure everything makes sense. The container the job runs will never see the job parameters, but it does see the environment variables.
Final notes
I do not know if there is a better way to solve this; for now, I am sharing with the community something that does work.
I have tested it myself, and the main idea came from reading the links that I list below:
AWSBatchJobs parameters (Use just as context info)
submit_job function AWS Docs (Ideal to learn about what kind of actions we are allowed to do or configure when creating a job)
I honestly hope this helps you and wish you a happy coding!

Can I use Electron without node.js

I'm new to JavaScript, so I would like to keep it to a bare minimum. Is there a way I can use Electron to communicate with a Python script without having Node.js? My app is just a basic app that takes some text input from users on an HTML page; I need this input to be processed in Python, which writes an Excel file. Since not much happens in the HTML, is there a simple way to transfer the input to the Python file? I want to use Electron because I need the HTML to be my UI, and I also need to distribute this app.
I guess the answer is "no": the main process running node will always be there.
An Electron app consists of a JavaScript main process, and one or more JavaScript renderer processes. There is no built-in Python support. And the user will need Python already installed. So, it sounds like a poor fit for what you need.
The answers here may be useful, and will show how to call the python script. I took a quick look at the flexx toolkit mentioned there. It seems to work with the user's browser, rather than producing a single executable.
Recently I managed this with a bit of a trick; I hope it helps you. These are the steps I followed:
I created a standalone Python executable using PyInstaller; the executable runs a Flask server internally, and I put it inside my Node application.
Then the Flask server has to be launched so we can send it requests for processing. I did this with the `execFile` function as a child process, wrapped in a helper function that looked something like this:
async function callFlask() {
    var child = require('child_process').execFile;
    child('path_to_python_exe', function(err, data) {
        if (err) {
            console.error(err);
            return;
        }
    });
}
With the Flask server started, we send it a request using fetch:
await callFlask();
await fetch('host_ip_defined_in_flask' + encodeURIComponent('data'));
We can then extend the then chain to read any response Python returns and proceed further, for example:
await callFlask();
await fetch('host_ip_defined_in_flask' + encodeURIComponent('data'))
    .then(res => res.text())
    .then(body => console.log(body));
Here, whatever output Python returns is printed to the console, and you can make your Node application behave differently depending on it.
You can also package your app with one of the available packagers for Electron, such as electron-packager; it works like a charm.
There are some disadvantages to using Python this way: it increases your package size, and the Python process can be difficult to kill from Electron after processing, which adds load on the host machine.
Explaining how to create a Flask server is beyond the scope of this question, but if you face any issues, let me know. I hope this helps.
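For the Python side, the server can be anything that speaks HTTP. As a dependency-free sketch of the same bridge idea (using the standard library's http.server instead of Flask, purely for illustration; the route and JSON response shape are made up):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # In the real app, run the Excel-writing logic here, then
        # return a result for Electron to read.
        body = json.dumps({'echo': self.path}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(('127.0.0.1', 0), Handler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Electron's fetch() would hit this same URL.
url = f'http://127.0.0.1:{server.server_port}/process?data=hello'
with urllib.request.urlopen(url) as resp:
    result = json.loads(resp.read())
print(result)  # {'echo': '/process?data=hello'}
server.shutdown()
```

The trade-off versus Flask is fewer features but no extra dependency to bundle into the PyInstaller executable.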

GEE Python API: Export image to Google Drive fails

Using GEE Python API in an application running with App Engine (on localhost), I am trying to export an image to a file in Google Drive. The task seems to start and complete successfully but no file is created in Google Drive.
I have tried to execute the equivalent javascript code in GEE code editor and this works, the file is created in Google Drive.
In python, I have tried various ways to start the task, but it always gives me the same result: the task completes but no file is created.
My python code is as follows:
landsat = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_123032_20140515').select(['B4', 'B3', 'B2'])
geometry = ee.Geometry.Rectangle([116.2621, 39.8412, 116.4849, 40.01236])
task_config = {
    'description': 'TEST_todrive_desc',
    'scale': 30,
    'region': geometry,
    'folder': 'GEEtest'
}
task = ee.batch.Export.image.toDrive(landsat, 'TEST_todrive', task_config)
ee.batch.data.startProcessing(task.id, task.config)
# Note: I also tried task.start() instead of this last line but the problem is the same, task completed, no file created.
# Printing the task list successively
for i in range(10):
    tasks = ee.batch.Task.list()
    print(tasks)
    time.sleep(5)
In the printed task list, the status of the task goes from READY to RUNNING and then COMPLETED. But after completion no file is created in Google Drive in my folder "GEEtest" (nor anywhere else).
What am I doing wrong?
I think the file is being generated and stored in the Google Drive of the service account used by the Python API, not in the private account you normally use with the web code editor.
You can't pass a dictionary of arguments directly like that in Python. You need to pass it using the kwargs convention (do a web search for more info). Basically, you just need to prefix the task_config argument with double asterisks, like this:
task = ee.batch.Export.image.toDrive(landsat, 'TEST_todrive', **task_config)
Then proceed as you have (I assume your use of task.config rather than task_config in the following line is a typo). Also note that you can query the task directly (using e.g. task.status()) and it may give more information about when / why the task failed. This isn't well documented as far as I can tell but you can read about it in the API code.
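The difference is easy to see in isolation. A small sketch of passing the dict positionally versus unpacking it with ** (the function here is a stand-in with a toDrive-like calling pattern, not the Earth Engine API):

```python
def export(image, name, description=None, scale=None, **extra):
    # Stand-in: keyword options must arrive as keyword arguments,
    # not as a single positional dict.
    return {'image': image, 'name': name,
            'description': description, 'scale': scale}

task_config = {'description': 'TEST_todrive_desc', 'scale': 30}

# Passing the dict positionally: the whole dict lands in one parameter.
wrong = export('landsat', 'TEST_todrive', task_config)
print(wrong['scale'])        # None (the dict was swallowed by 'description')

# Unpacking with ** delivers each key as a separate keyword argument.
right = export('landsat', 'TEST_todrive', **task_config)
print(right['description'])  # TEST_todrive_desc
print(right['scale'])        # 30
```

This is why the export silently misbehaves rather than raising an error: the dict is a valid value for the first optional parameter, so nothing complains.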

How can I extend functionality of a Direct Connect client related to chat? Is there any way I can get pyDC to work?

I need to write code to reply when a particular message is seen in the hub chat.
I tried using PyDC but was not able to get it to work; there is some problem because it expects old wxPython libraries.
The command-line client works, but as far as I can see it does not support chat. The GUI one tries to import shell from wx.lib.PyCrust, but PyCrust has been renamed to wx.py. After I tried importing shell from wx.py, the GUI started but was unable to connect to any hub. The command-line client connects fine.
Is there any other way I can do what I want?
EiskaltDC++ Qt lets you write scripts in QtScript. I can use this to do what I need.
