I want to implement a simple firewall in Python that will block access to a given list of sites. To do this, I want to know how to block a specific site using python-iptables. For example, how do I block access to www.facebook.com?
Try this link, it works perfectly: use subprocess and issue the block command.
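For instance, a minimal sketch of that subprocess route (requires root; note that iptables resolves the hostname once, when the rule is added):

import subprocess

# Append a rule to the OUTPUT chain dropping traffic to the site.
# Passing the arguments as a list avoids shell-quoting issues.
subprocess.check_call(
    ["iptables", "-A", "OUTPUT", "-d", "www.facebook.com", "-j", "DROP"]
)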
Alternatively, you can use an iptables rule directly:
iptables -A OUTPUT -d www.facebook.com -j DROP
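Since the question asks about python-iptables specifically, here is a minimal sketch using its iptc module. Keep in mind that iptables matches IP addresses, so the hostname is resolved at rule-creation time, and a site served from many IPs needs every address blocked; this must also run as root:

import socket
import iptc

def block_site(hostname):
    # Resolve the hostname to all of its current IPv4 addresses
    _, _, addresses = socket.gethostbyname_ex(hostname)
    chain = iptc.Chain(iptc.Table(iptc.Table.FILTER), "OUTPUT")
    for address in addresses:
        rule = iptc.Rule()
        rule.dst = address
        rule.target = iptc.Target(rule, "DROP")
        chain.insert_rule(rule)

block_site("www.facebook.com")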
I created a Spotipy script that pulls the last 50 songs I listened to and adds them, along with their audio features, to a Google Sheet. However, I'd love to be able to run this script daily without having to run it manually, but I have very little experience with cron scheduling, and I'm having trouble wrapping my head around how it can be run given everything I need to enter on the command line.
The script first requires several environment variables to be exported, such as
export SPOTIPY_REDIRECT_URI="http://google.com"
export SPOTIPY_CLIENT_SECRET='secret'
and similar for the client ID.
Additionally, the first argument after the script name is the username: username = sys.argv[1].
Most importantly, it prompts me to copy and paste a redirect URL into the command line, which is unique each run.
Is it at all possible to pass the redirect URL to the script each time it is run via cron?
I think what you're looking to accomplish can be achieved in one of two ways.
First, you could write a shell script to handle the export commands and passing the redirect URI to your script. Second, with a few tweaks to your script, you should be able to run it headlessly, without the intermediary copy/paste step.
I'm going to explore the second route in my answer.
Regarding the environment variables, you have a couple of options. Depending on your distribution, you may be able to set these variables in the crontab before the job definitions. Alternatively, you can set the exports in the same job you use to run your script, separating the commands with semicolons. See this answer for a detailed explanation.
Then, regarding the username argument: fortunately, script arguments can also be passed via cron. You just list the username after the script's name in the job; cron will pass it as an argument to your script.
Finally, regarding the redirect URI: if you change your redirect URI to something like localhost with a port number, the script will automatically redirect for you. This wasn't actually made clear to me in the spotipy documentation, but rather from the command line when authenticating with localhost: "Using localhost as redirect URI without a port. Specify a port (e.g. localhost:8080) to allow automatic retrieval of authentication code instead of having to copy and paste the URL your browser is redirected to." Following this advice, I was able to run my script without having to copy/paste the redirect URL.
Add something like "http://localhost:8080" to your app's Redirect URIs in the Spotify Dashboard, then export it in your environment and run your script -- it should run without your input!
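For illustration, a minimal sketch of the headless flow, assuming your script uses spotipy's SpotifyOAuth manager (the scope here is an assumption; the client ID and secret come from the exported environment variables). After one interactive run caches the token, subsequent cron runs need no browser:

import sys
import spotipy
from spotipy.oauth2 import SpotifyOAuth

username = sys.argv[1]  # passed by cron after the script name
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-read-recently-played",
    redirect_uri="http://localhost:8080",  # must match the Spotify Dashboard
))
recent = sp.current_user_recently_played(limit=50)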
Assuming that works, you can put everything together in a job like this to execute your script daily at 17:30:
30 17 * * * export SPOTIPY_REDIRECT_URI="http://localhost:8080"; export SPOTIPY_CLIENT_SECRET='secret'; python your_script.py "your_username"
I am writing a script in Python which is run by a Zabbix action.
I want to set values in the Default subject and Default message fields of the action, and then use these values in my script. So I run the script and forward all the needed macros as script parameters, like:
python /path/script.py -A "{HOST.NAME}" -B "{ALERT.MESSAGE}" -C "{ALERT.SUBJECT}"
I can get only the HOST.NAME value; for the others I get only the macro name but no value.
Do you have any idea where the problem is? Are those macros unavailable to custom scripts?
After doing some research & testing myself, it seems that these Alert macros are indeed not available in a custom script operation. [1]
You have two options for a workaround:
If you need to be able to execute this script on the host itself, the quick option is simply to replace the macros with the actual text of your subject and alert names. Some testing is definitely necessary to make sure it works in your environment, and it's not the most elegant solution, but something like this may well work with little extra effort:
python /path/script.py -A "{HOST.NAME}" -B "Problem: {EVENT.NAME}" -C "Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}"
Verify, of course, that e.g. the newlines do not break your custom script in your environment.
It doesn't look pretty but it may well be the easiest option.
If you can run the command on any host, the nicer option is to create a new Media type, which will let you use these variables and may even make adding this script to other hosts much easier. These macros can definitely be used as part of a custom Media type (see Zabbix Documentation - Media Types) which can include custom scripts.
You'll need to make a bash or similar script file for the Zabbix server to run (which means doing anything on a host outside the Zabbix server itself is going to be more difficult, but not impossible).
Once the media type is set up, as a bit of a workaround (not ideal, of course) you'll need a user to 'send' to; assign that media type to the user, and 'sending' the alert to that user should execute your script with the macros expanded, just as if you had executed the custom command.
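As a rough sketch, the media type's script can itself be Python. Zabbix runs it from the server's AlertScriptsPath directory and passes the parameters you configure for the media type (commonly {ALERT.SENDTO}, {ALERT.SUBJECT}, {ALERT.MESSAGE}) as positional arguments; the file name and body below are assumptions:

#!/usr/bin/env python
# Hypothetical alert script saved under the server's AlertScriptsPath.
import sys

send_to, subject, message = sys.argv[1], sys.argv[2], sys.argv[3]
# ... do whatever the original -A/-B/-C script did with these values ...
print("To: %s\nSubject: %s\n%s" % (send_to, subject, message))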
[1]: While I did do my own testing on this, I couldn't find any documentation which specifically states that these macros aren't supported in this case, and they definitely look like they should be. More than happy to edit/revoke this answer if anyone can find documentation that confirms or denies this.
I should also explain how it works now. I did something like:
python /path/script.py -A "{HOST.NAME}" -B "Problem: {EVENT.NAME}" -C "Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}"
works for me :)
I want to deploy my Scrapy project to an IP that is not listed in the scrapy.cfg file, because the IP can change, and I want to automate the deployment process. I tried giving the server's IP directly in the deploy command, but it did not work. Any suggestions?
First, you should consider assigning a domain to the server, so you can always get to it regardless of its dynamic IP. DynDNS comes in handy at times.
Second, you probably won't do the first, because you don't have access to the server, or for whatever other reason. In that case, I suggest mimicking the above behavior using your system's hosts file. As described in the Wikipedia article:
The hosts file is a computer file used by an operating system to map hostnames to IP addresses.
For example, let's say you set your url to remotemachine in your scrapy.cfg. You can write a script that edits the hosts file with the latest IP address, and execute it before deploying your spider. This approach has the benefit of a system-wide effect, so if you are deploying multiple spiders or using the same server for some other purpose, you don't have to update multiple configuration files.
This script could look something like this:
import fileinput
import sys

def update_hosts(hostname, ip):
    # Pick the hosts file location for the current platform
    if 'linux' in sys.platform:
        hosts_path = '/etc/hosts'
    else:
        hosts_path = r'c:\windows\system32\drivers\etc\hosts'
    # Rewrite the file in place; hosts entries are "IP hostname", in that order
    for line in fileinput.input(hosts_path, inplace=True):
        if hostname in line:
            print("{0}\t{1}".format(ip, hostname))
        else:
            print(line.strip())

if __name__ == '__main__':
    hostname = sys.argv[1]
    ip = sys.argv[2]
    update_hosts(hostname, ip)
    print("Done!")
Of course, you should add additional argument checks, etc.; this is just a quick example.
You can then run it prior to deploying, like this:
python updatehosts.py remotemachine <remote_ip_here>
If you want to take it a step further and add this functionality as a simple argument to scrapyd-deploy, you can go ahead and edit your scrapyd-deploy file (it's just a Python script) to add the extra parameter and update the hosts file from within. But I'm not sure this is the best thing to do; leaving this implementation separate and more explicit is probably the better choice.
This is not something you can solve on the scrapyd side.
According to the source code of scrapyd-deploy, it requires the url to be defined in the [deploy] section of the scrapy.cfg.
One possible workaround is to have a placeholder in scrapy.cfg which you replace with the real IP address of the target server before starting scrapyd-deploy.
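For example, a quick sketch of that substitution (the TARGET_IP placeholder name is an assumption; it would sit in the url line of the [deploy] section):

import sys

PLACEHOLDER = 'TARGET_IP'  # hypothetical placeholder in scrapy.cfg's [deploy] url

with open('scrapy.cfg') as f:
    config = f.read()

with open('scrapy.cfg', 'w') as f:
    f.write(config.replace(PLACEHOLDER, sys.argv[1]))

Run it with the server's current IP before each scrapyd-deploy, e.g. python update_cfg.py <remote_ip_here>.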
We (friends and I) have a small dedicated server with nginx and the geoip module installed. (It's properly installed)
On that server we run a simple Python script with uWSGI and Bottle.
The script rotates banners (our own banners, for self-promotion).
We use it to show banners for sites we own on other sites we own, rotating them so the user doesn't always see the same banner.
We have a problem with the geotargeting.
The following pastebin shows the python script.
http://pastebin.com/PqQ6TQeN
PAISES = ['AR', 'MX', 'CL'] is the list of country codes.
TODOS is the tag to show the banner to all countries.
The different lists are for different banner sizes.
The URL for the rotating banners looks like this:
exampleip/api/300x250
This calls the template for the size of 300x250 so the user will see a random banner from our list for that size.
That works fine.
But the geotargeting isn't working.
In the code (pastebin link) you can see the 300x250 banners have only the "AR" code for Argentina, so only users from that country should see those ads.
However they keep being displayed for other IPs.
And after adding this:
print('>>>>> ',request.headers.keys())
pais = request.get_header('GEOIP_CITY_COUNTRY_CODE')
print('=========== ' , pais, ' ==================')
(*Note: pais means country)
and running the uWSGI process via SSH, it returns None for GEOIP_CITY_COUNTRY_CODE.
That means the parameters aren't being passed to the Python script correctly.
The geoip module is properly installed, but this script isn't working properly.
I need to get it fixed.
I'm sure it's not something complicated and I'm just writing something wrong in the code; maybe I'm not passing the parameters to uWSGI or Python correctly.
In my Django project I need to be able to check whether a host on the LAN is up using an ICMP ping. I found this SO question which answers how to ping something in Python, and this SO question which links to resources explaining how to use the sudoers file.
The Setting
A Device model stores an IP address for a host on the LAN, and after adding a new Device instance to the DB (via a custom view, not the admin) I envisage checking to see if the device responds to a ping using an AJAX call to an API which exposes the capability.
The Problem
However (from the docstring of a library suggested in the first SO question), "Note that ICMP messages can only be sent from processes running as root."
I don't want to run Django as the root user, since that is bad practice, but this part of the process (sending an ICMP ping) needs to run as root. If a Django view is to send off a ping packet to test a host's liveness, then Django itself has to run as root, since that is the process invoking the ping.
Solutions
These are the solutions I can think of, and my question is are there any better ways to only execute select parts of a Django project as root, other than these:
Run Django as root (please no!)
Put a "ping request" in a queue that another processes -- run as root -- can periodically check and fulfil. Maybe something like celery.
Is there not a simpler way?
I want something like a "Django run as root" library, is this possible?
Absolutely no way, do not run the Django code as root!
I would run a daemon as root (written in Python, why not) and use IPC between the Django instance and your daemon. As long as you're sure to validate the content and handle it properly (e.g. use subprocess.call with an argument list) and only pass in data (not commands to execute), it should be fine.
Here is an example client and server, using web.py
Server: http://gist.github.com/788639
Client: http://gist.github.com/788658
You'll need to install web.py, but it's worth having around anyway. If you can hard-wire the IP (or hostname) into the server and remove the argument, all the better.
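If you'd rather avoid web.py, here is a bare-bones sketch of the same daemon idea over a Unix socket (the socket path is an assumption; note it only ever accepts data, an IP address, never commands):

import os
import re
import socket
import subprocess

SOCKET_PATH = "/var/run/ping_daemon.sock"  # hypothetical path

# Remove a stale socket from a previous run, then bind and listen
if os.path.exists(SOCKET_PATH):
    os.unlink(SOCKET_PATH)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCKET_PATH)
server.listen(1)

while True:
    conn, _ = server.accept()
    host = conn.recv(256).decode().strip()
    # Validate that the payload is an IPv4 address before touching it
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        up = subprocess.call(["ping", "-c", "1", host]) == 0
        conn.sendall(b"up" if up else b"down")
    else:
        conn.sendall(b"invalid")
    conn.close()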
What's your OS here? You might be able to write a little program that does what you want given a parameter, add an entry for it to the sudoers file, and give your django user permission to run it as root.
/etc/sudoers
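For instance, with a sudoers entry like the one in the comment below (helper path and user name are hypothetical), the Django view can shell out via sudo without Django itself running as root:

import subprocess

# Assumed sudoers line, added with visudo:
#   django ALL=(root) NOPASSWD: /usr/local/bin/ping_helper
def host_is_up(ip):
    # Pass arguments as a list so no shell is involved
    return subprocess.call(["sudo", "/usr/local/bin/ping_helper", ip]) == 0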
I don't know what kind of system you're on, but on any box I've encountered, one does not have to be root to run the command-line ping program (it has the suid bit set, so it becomes root as necessary). So you could just invoke that. It's a bit more overhead, but probably negligible compared to network latency.
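In other words, something like this sketch, which relies only on the setuid ping binary (the flags shown are Linux ping's):

import subprocess

def ping(host, timeout_s=1):
    # -c 1: single echo request; -W: reply timeout in seconds
    return subprocess.call(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ) == 0

print(ping("192.168.1.1"))  # True if the host answered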