I recently started using Swagger to generate flask templates: http://editor.swagger.io/#/
My workflow is flawed, and I am wondering where I am going wrong. I:
Use the UI to write the API V1 .yaml
Generate the code using the UI editor, which downloads a stubbed out zip
Write the functions that were stubbed out
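For context, the kind of controller stub I'm talking about looks roughly like this (the function and parameter names are only illustrative, not actual generated output):

    # controllers/pets_controller.py -- roughly what the generator emits
    def pets_get(limit=None):
        return 'do some magic!'

    # Step 3 is replacing that body with my real logic, e.g. something like:
    # def pets_get(limit=None):
    #     rows = query_pets(limit)   # my own data access code
    #     return [row.to_dict() for row in rows], 200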
This part of the process is fine. However, let's say we want to add a new endpoint or change an existing endpoint. Now what? I:
reload the swagger editor
edit the yaml
generate the code, which downloads a new zip and blows away the old code
take the newly generated code and do a "self-merge" where I copy the new stubs into the old code and copy the new yaml over the old one
It seems there is a gap between the initial generation of the flask template and ongoing maintenance. What am I doing wrong?
Yeah, there isn't really a good workflow for that yet AFAICS.
One thing you can do is check the original generated code into git on a branch called "generated" or similar. Then merge that to master and start working on it. If, at a later point, you extend your swagger definition, you can generate the code again, switch to the generated branch, overwrite the existing code with the newly generated code, commit and merge to master again. If all you had were some additional endpoints, this should even work without any merge conflicts.
It would of course be nicer if the swagger tools had a concept of which code they generated and were able to update that generated code, but until then this should be a bearable workaround.
Related
For my project we can make custom React components; they get built and the compiled code is transferred to the root of the project. I automated that with a few os commands.
I have it set up so that any time you change a file and save, those commands run. But I want to make it even more efficient. Instead of compiling the custom React components on every save, I only want to compile when the saved file is inside that folder; otherwise, do the normal hot reloading without compiling the custom React code.
In short, I don't want to recompile the React files if the change wasn't made in the custom React code folder, because right now my automated process does it on every save (see the sketch at the end of this post for the kind of folder-scoped check I mean).
I'm thinking:
If there were a way to check which file triggered Dash's hot reload, that would be great, or even access to that function. I looked in the source code, but I didn't see anything that allows me to do that.
Or, if there's a callback that happens before the hot reload feature in Dash.
Or, if there's a Python library or plugin that could help me out, because I only started using it a couple of months ago.
I made custom react components with this tutorial: https://dash.plotly.com/plugins
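To illustrate what I mean by scoping the rebuild to one folder, here is a rough sketch of the kind of watcher I have in mind, run alongside the Dash app; the folder name and build commands are placeholders for my actual os commands:

    import os
    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    COMPONENTS_DIR = "custom_components"   # placeholder for my components folder

    class ComponentChangeHandler(FileSystemEventHandler):
        def on_modified(self, event):
            if event.is_directory:
                return
            # Only reached for saves inside COMPONENTS_DIR, so the expensive
            # build runs only then; saves elsewhere are left to Dash's own
            # hot reload.
            os.system("npm run build")        # placeholder build command
            os.system("cp -r deps/. ../")     # placeholder copy-to-root command

    observer = Observer()
    observer.schedule(ComponentChangeHandler(), path=COMPONENTS_DIR, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()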
Context
I'm working on a Data Science project in which I'm running a data analysis task on a dataset (let's call it the original dataset) and creating a processed dataset (let's call this one the result). The latter can be queried by a user, who creates different plots through a Dash application. The system also makes some predictions on an attribute of this dataset thanks to ML models. Everything will run on an external VM provided by my company.
What is my current "code"
Currently I have these python scripts that create the result dataset (except the Dashboard one):
concat.py (simply concatenates some files)
merger.py (merges different files in the project directory)
processer1.py (processes the first file needed for the analysis)
processer2.py (processes a second file needed for the analysis)
Dashboard.py (the Dash application)
ML.py (runs a classic ML task, creates a report and an updated result dataset with some predictions)
What I should obtain
I'm interested in creating this kind of solution, which will run on the VM:
Dashboard.py runs 24/7; it depends on the existence of the "result" dataset and is useless without it.
Every time there's a change in the project directory (new files are added every month), the system triggers the execution of concat.py, merger.py, processer1.py and processer2.py. Maybe a Python script and the watchdog package can help to create this trigger mechanism (see the sketch after this list)? I'm not sure.
Once the execution above is done, ML.py is executed on the "result" dataset and the output is uploaded to the dashboard.
Dashboard.py is restarted with the new csv file.
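As a concrete starting point, here is a rough sketch of the trigger-and-restart idea using watchdog and subprocess; only the script names come from my list above, everything else (the watched folder, keeping the dashboard alive with Popen) is an assumption about how it could be wired:

    import subprocess
    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    PIPELINE = ["concat.py", "merger.py", "processer1.py", "processer2.py", "ML.py"]
    WATCHED_DIR = "incoming"                      # placeholder for the observed directory

    dashboard = subprocess.Popen(["python", "Dashboard.py"])   # keep the dashboard up

    class NewDataHandler(FileSystemEventHandler):
        def on_created(self, event):
            global dashboard
            if event.is_directory:
                return
            # A new file arrived: rebuild "result", then restart the dashboard.
            for script in PIPELINE:
                subprocess.run(["python", script], check=True)
            dashboard.terminate()
            dashboard.wait()
            dashboard = subprocess.Popen(["python", "Dashboard.py"])

    observer = Observer()
    observer.schedule(NewDataHandler(), path=WATCHED_DIR, recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(60)
    finally:
        observer.stop()
        observer.join()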
I would like some help understanding which technologies are necessary to achieve this, ideally an example or a source, so I can fully understand and apply the right approach. I know that I may have to use a Python script to orchestrate the whole system, perhaps the same script that observes the directory, perhaps not.
The most important thing is that the dashboard is always up; this is what creates the need to run things simultaneously. Only when the "result" csv dataset is complete and uploaded does the dashboard need to be restarted; I think it is best for the users to keep the service continuous.
The users will feed the dashboard by dropping new files into the observed directory. The automation needs to be driven by "triggers" that execute the code, since they are not skilled users and they will not be allowed to use the VM's shell (I suppose). Alternatively, I could consider a scheduled execution instead, say once a month.
The company won't grant me another VM or anything similar, so I have to do it all on a single VM.
Premise
This is the first time I have had to put something "in production", and I have no experience at all. Could anyone help me find the best approach? Thanks in advance.
I have an FTP server working great using Python and the pyftpdlib library (https://code.google.com/p/pyftpdlib/). I would like to, on login (either anonymous or user), create an html file reflecting the latest state of the server in a nice-looking way: for example, all the files that are on the server and their properties, nicely separated and presented. I thought that since I was already doing everything in Python, and my html wouldn't be overly complex, I would just have Python write the html file on login, and then the user could open the html file for the information that was written seconds before.
My main problem is that when I override the "public callbacks" section of handlers.py (or any section so far), no file is created that I can find. I am new to Python, but it seems like a modification in the handlers.py file should affect the Handler class. Another idea I plan on trying is to subclass the handler base class and add my own "on_login" function that does create the html file.
What I am really asking for is
1) Advice from anybody who has done/tried this before
2) Any red flags that are going off in your head regarding my plan
3) Any other ideas (ideally strictly using python)
Thanks!
What worked for me was not editing the handlers.py file, but rather creating my own subclass (myFTPHandler) and then redefining the on_connect method to write my html file at that point.
Thanks for the help though!
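For anyone finding this later, a minimal sketch of that approach, assuming a recent pyftpdlib where the handler lives in pyftpdlib.handlers (the FTP root, port, and HTML layout are made up):

    import os
    from pyftpdlib.authorizers import DummyAuthorizer
    from pyftpdlib.handlers import FTPHandler
    from pyftpdlib.servers import FTPServer

    FTP_ROOT = "/srv/ftp"                              # placeholder server root
    REPORT = os.path.join(FTP_ROOT, "listing.html")

    class myFTPHandler(FTPHandler):
        def on_connect(self):
            # Rebuild the HTML listing every time someone connects.
            rows = []
            for name in sorted(os.listdir(FTP_ROOT)):
                size = os.path.getsize(os.path.join(FTP_ROOT, name))
                rows.append("<tr><td>%s</td><td>%d bytes</td></tr>" % (name, size))
            with open(REPORT, "w") as f:
                f.write("<html><body><table>%s</table></body></html>" % "".join(rows))

    authorizer = DummyAuthorizer()
    authorizer.add_anonymous(FTP_ROOT)
    myFTPHandler.authorizer = authorizer
    FTPServer(("0.0.0.0", 2121), myFTPHandler).serve_forever()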
http://code.google.com/p/pyfpdf/wiki/Web2Py#Sample_Table_Listing
This would be my first time using web2py, I'm using it because the example code is exactly what I need for part of a project.
My problem is I have no idea where to put this code. I'm using Google App Engine.
To understand where to put that code, you'll need at least a basic understanding of how web2py applications are structured. I recommend at least looking at the Overview chapter of the book.
The function definitions shown (i.e., report(), listing(), and invoice()) would go in a controller file in your application's '/controllers' folder (the scaffolding application includes a 'default.py' controller file, though you could rename that or create a new controller file). The calls to db.define_table would typically go in a model file in your application's '/models' folder (the scaffolding application includes a 'db.py' model file, though again, you could rename that or create a new model file).
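As a rough illustration of that layout (the table, field, and function names here are placeholders, not the ones from the pyfpdf example):

    # models/db.py -- web2py executes model files first and injects DAL/Field
    db = DAL('sqlite://storage.sqlite')   # on GAE you'd point this at the datastore instead
    db.define_table('customer',
                    Field('name'),
                    Field('email'))

    # controllers/default.py -- each function becomes a URL like /app/default/listing
    def listing():
        rows = db(db.customer).select()
        return dict(rows=rows)            # rendered by views/default/listing.html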
Note, there was a recent discussion on the mailing list regarding getting pyfpdf to work on GAE.
I use CVS to maintain all my Python snippets, notes, and C/C++ code. As the hosting provider also provides a public web server, I was thinking that I should automatically convert the CVS repository into a programming snippets website.
cvsweb is not what I mean.
doxygen is for a complete project, for browsing cross-referenced code online. I think doxygen is more like a web-based ctags.
I tried rest2web; it requires that I write /restweb headers and that the files be .txt files, which will interfere with the programming language syntax.
An approach I have thought of is:
1) run source-highlight and create .html pages for all the scripts.
2) now write a script to index those .html files and create a webpage (a rough sketch of this step follows the list).
3) create the website from those pages.
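For step 2 I'm imagining something along these lines (directory and file names are made up):

    import os

    SRC = "htmlized"   # where source-highlight wrote the .html files
    links = []
    for dirpath, dirnames, filenames in os.walk(SRC):
        for name in sorted(filenames):
            if name.endswith(".html") and name != "index.html":
                rel = os.path.relpath(os.path.join(dirpath, name), SRC)
                links.append('<li><a href="%s">%s</a></li>' % (rel, rel))

    with open(os.path.join(SRC, "index.html"), "w") as out:
        out.write("<html><body><ul>%s</ul></body></html>" % "\n".join(links))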
Before proceeding, I thought I would discuss it here, in case the members have any suggestions.
What do you do when you want to maintain your snippets and notes in CVS and also auto-generate a good website from them? I like rest2web for converting notes to html.
Run Trac on the server linked to the (svn) repository. The Trac wiki can conveniently refer to files and changesets. You get TODO tickets, too.
enscript or pygmentize (part of pygments) can be used to convert code to HTML. You can use a custom header or footer to link to the actual code for download.
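A minimal sketch of the pygments route, if you go that way (the file name and formatter options are arbitrary):

    from pygments import highlight
    from pygments.lexers import get_lexer_for_filename
    from pygments.formatters import HtmlFormatter

    def to_html(path):
        with open(path) as f:
            code = f.read()
        lexer = get_lexer_for_filename(path)                # pick lexer by extension
        formatter = HtmlFormatter(full=True, linenos=True)  # standalone page with line numbers
        with open(path + ".html", "w") as out:
            out.write(highlight(code, lexer, formatter))

    to_html("snippet.py")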
I finally settled on rest2web. I had to do the following.
Use a separate Python script to recursively copy the files in the CVS checkout to a separate directory (a rough sketch of this follows the list).
Add the extra files index.txt and template.txt to all the directories which I wanted to appear in the webpage.
The best thing about rest2web is that it supports Python scripting within template.txt, so I just ran a loop over the contents and indexed them in the page.
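The copy step above was essentially this kind of thing (the paths are placeholders):

    import os
    import shutil

    src = os.path.expanduser("~/cvswork/snippets")   # CVS checkout (placeholder path)
    dst = os.path.expanduser("~/site/snippets")      # staging directory rest2web reads

    # Copy the working tree but leave out the CVS bookkeeping directories.
    # copytree expects dst not to exist yet.
    shutil.copytree(src, dst, ignore=shutil.ignore_patterns("CVS"))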
There is still a lot more to do to automate the entire process, e.g. inline viewing of programs and colorization, which I think can be done with a few more trials.
I have the completed website here; it is called uthcode.