I am able to add a PPA using the aptsources module but cannot remove it. I cannot find the correct syntax to remove the PPA from sources.list. Here's my code:
import aptsources.sourceslist as s
repo = ('deb', 'http://ppa.launchpad.net/danielrichter2007/grub-customizer/ubuntu', 'xenial', ['main'])
sources = s.SourcesList()
sources.add(repo)
sources.save()
#doesn't work
sources.remove(repo)
I tried reading the docs found here but I still cannot find the format to call sources.remove(repo)
The SourcesList.remove() help text reads remove(source_entry), which indicates that what it wants is a SourceEntry object. As it happens, sources.add() returns a SourceEntry object:
import aptsources.sourceslist as sl
sources = sl.SourcesList()
entry = sources.add('deb', 'mirror://mirrors.ubuntu.com/mirrors.txt', 'xenial', ['main'])
print(type(entry))
Outputs:
<class 'aptsources.sourceslist.SourceEntry'>
To remove the entry:
sources.remove(entry)
sources.save()
You can also disable it (which will leave a commented-out entry in sources.list):
entry.set_enabled(False)
sources.save()
I'm using this to do the removing for now.
import fileinput

filename = '/etc/apt/sources.list'
word = 'grub-customizer'
n = ""

# rewrite the file in place, blanking out any line that mentions the PPA
remove = fileinput.input(filename, inplace=1)
for line in remove:
    if word in line:
        line = n
    print(line, end='')
remove.close()
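Alternatively, a minimal sketch that stays within aptsources (assuming the same grub-customizer PPA as above; the URI match string is just an example) and removes matching entries instead of editing sources.list by hand:
import aptsources.sourceslist as sl

sources = sl.SourcesList()
# iterate over a copy, since remove() mutates the underlying list
for entry in list(sources.list):
    if 'grub-customizer' in entry.uri:
        sources.remove(entry)
sources.save()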
I have code in a Jupyter notebook that uses requests to check whether a URL exists, and then writes the output to a text file. Here is the code for that:
import requests

Instaurl = open("dictionaries/insta.txt", 'w', encoding="utf-8")
cli = ['duolingo', 'ryanair', 'mcguinness.paddy', 'duolingodeutschland', 'duolingobrasil']
exist = []
url = []
for i in cli:
    r = requests.get("https://www.instagram.com/" + i + "/")
    if r.apparent_encoding == 'Windows-1252':
        exist.append(i)
        url.append("instagram.com/" + i + "/")
Instaurl.write("\n".join(url))
Let's say that inside the cli list, I accidentally added the same existing username as before (duolingo, for example). Is there a way where, if requests found the same URL as one already in the text file, it would not be added to the text file again?
Thank you!
You defined a list:
cli = ['duolingo', ...]
It sounds like you would prefer to define a set:
cli = {'duolingo', ...}
That way, duplicates will be suppressed.
It happens for dups in the initial
assignment, and for any duplicate cli.add(entry) you might attempt later.
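For instance, a quick illustration of that behaviour:
cli = {'duolingo', 'ryanair', 'duolingo'}   # duplicate in the literal
print(cli)          # {'duolingo', 'ryanair'} -- the duplicate is dropped (order may vary)
cli.add('ryanair')  # adding an existing member is a no-op
print(len(cli))     # still 2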
I'm on Python 3.8.3 on Windows 10.
I am working on a pdfparser and I initially found slate3k to use with Python 3.X. I got a basic script working and started to test it on some PDFs. I had some issues with some text not being parsed properly so I started to look into PDFMiner.
After reading through the documentation for PDFMiner, I decided to install that a give it a go as there was some functionality from it that would be super useful for my use case.
However, I figured out soon after that PDFMiner doesn't work with Python 3.x. I uninstalled it and went back to using slate3k.
When I did this, I started to get a bunch of errors. I then uninstalled slate3k and re-installed it, hoping to fix things. Still got the errors. I re-installed PDFMiner and got rid of those errors, but now I'm stuck with the error below and I'm at a loss for what to do next.
Exception has occurred: TypeError
__init__() missing 1 required positional argument: 'parser'
Here is the code (please note I haven't done much error trapping and it's still a work in progress, I'm more at the "proof of concept" stage):
import re, os
import slate3k as slate

# variable define
CurWkDir = os.getcwd()
tags = list()
rev = str()
FileName = str()
ProperFileName = str()
parsed = str()

# open file and create if it doesn't exist
xref = open('parsed from pdf xref.csv', 'w+')
xref.write('File Name, Rev, Tag')

for files in os.listdir(CurWkDir):
    # find pdf files
    if files.endswith('.pdf'):
        tags.clear()
        rev = ""
        FileName = ""
        ProperFileName = ""
        # extract revision, file name, create proper file name
        rev = re.findall(r'[0-9]{,2}[A-Z]{1}[0-9]{,2}', files)[0]
        FileName = re.findall(r'[A-Z]+[0-9]+-[A-Z]+-[0-9]+-[0-9]+|[A-Z]+[0-9]+-[A-Z]+-[A-Z]+[0-9]+-[0-9]+|[A-Z]+[0-9]+-[A-Z]+-[A-Z]+[0-9]+[A-Z]+-[0-9]+', files)[0]
        ProperFileName = FileName + "(" + rev[0: len(rev) - 1] + ")"
        # Parse through PDF to find tags
        fileopen = open(files, 'rb')
        print("Reading", files)
        raw = slate.PDF(fileopen)
        print("Finished reading", files)
        parsed = raw[0]
        parsedstripped = parsed.replace("\n", " ")
        rawtags = re.findall(r'[0-9]+[A-Z]+-[0-9]+|[0-9]+[A-Z]+[0-9]{1,5}|[0-9]{3}[A-Z]+[0-9]+', parsed, re.I)
        fileopen.close()
        print(parsedstripped)
        for t in rawtags:
            if t not in tags:
                row = ProperFileName + "," + rev + "," + t + "\n"
                xref.write(row)
                tags.append(t)
xref.close()
The error comes at Line 34
raw = slate.PDF(fileopen)
Any insight into what I did to break the functionality of slate3k is appreciated.
Thanks,
JT
I looked into the dependencies of slate3k by looking at pip show slate3k and found a couple of packages it depends on.
I uninstalled slate3k, pdfminer3k and pdfminer and then re-installed slate3k.
Now everything seems to be working.
Using Python 2 (atm) and ruamel.yaml 0.13.14 (RedHat EPEL).
I'm currently writing some code to load YAML definitions, but they are split up in multiple files. The user-editable part contains e.g.:
users:
  xxxx1:
    timestamp: '2018-10-22 11:38:28.541810'
    <<: *userdefaults
  xxxx2:
    <<: *userdefaults
    timestamp: '2018-10-22 11:38:28.541810'
The defaults are stored in another file, which is not editable:
userdefaults: &userdefaults
  # Default values for user settings
  fileCountQuota: 1000
  diskSizeQuota: "300g"
I can process these together by loading both and concatenating the strings, and then running them through merged_data = list(yaml.load_all("{}\n{}".format(defaults_data, user_data), Loader=yaml.RoundTripLoader)) which correctly resolves everything. (When not using RoundTripLoader I get errors that the references cannot be resolved, which is normal.)
Now, I want to do some updates via Python code (e.g. update the timestamp), and for that I need to write back just the user part. And that's where things get hairy. So far I haven't found a way to write just that YAML document, not both.
First of all, unless there are multiple documents in your defaults file, you
don't have to use load_all, as you don't concatenate two documents into a
multiple-document stream. If you had, by using a format string with a document-end
marker ("{}\n...\n{}") or with a directives-end marker ("{}\n---\n{}"),
your aliases would not carry over from one document to another, as per the
YAML specification:
It is an error for an alias node to use an anchor that does not
previously occur in the document.
The anchor has to be in the document, not just in the stream (which can consist of multiple
documents).
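As a small hedged sketch of that point (inline strings standing in for the two files, same anchor name as in the question), loading them as two separate documents makes the alias unresolvable:
from ruamel import yaml
from ruamel.yaml.error import YAMLError

defaults_data = "userdefaults: &userdefaults\n  fileCountQuota: 1000\n"
user_data = "users:\n  xxxx1:\n    <<: *userdefaults\n"

# one stream, two documents: the anchor lives in the first document only
try:
    list(yaml.load_all("{}\n---\n{}".format(defaults_data, user_data),
                       Loader=yaml.RoundTripLoader))
except YAMLError as e:
    print(e)  # complains that the alias 'userdefaults' is undefined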
I tried some hocus pocus, pre-populating the already represented dictionary
of anchored nodes:
import sys
import datetime
from ruamel import yaml


def load():
    with open('defaults.yaml') as fp:
        defaults_data = fp.read()
    with open('user.yaml') as fp:
        user_data = fp.read()
    merged_data = yaml.load("{}\n{}".format(defaults_data, user_data),
                            Loader=yaml.RoundTripLoader)
    return merged_data


class MyRTDGen(object):
    class MyRTD(yaml.RoundTripDumper):
        def __init__(self, *args, **kw):
            pps = kw.pop('pre_populate', None)
            yaml.RoundTripDumper.__init__(self, *args, **kw)
            if pps is not None:
                for pp in pps:
                    try:
                        anchor = pp.yaml_anchor()
                    except AttributeError:
                        anchor = None
                    node = yaml.nodes.MappingNode(
                        u'tag:yaml.org,2002:map', [], flow_style=None, anchor=anchor)
                    self.represented_objects[id(pp)] = node

    def __init__(self, pre_populate=None):
        assert isinstance(pre_populate, list)
        self._pre_populate = pre_populate

    def __call__(self, *args, **kw):
        kw1 = kw.copy()
        kw1['pre_populate'] = self._pre_populate
        myrtd = self.MyRTD(*args, **kw1)
        return myrtd


def update(md, file_name):
    ud = md.pop('userdefaults')
    MyRTD = MyRTDGen([ud])
    yaml.dump(md, sys.stdout, Dumper=MyRTD)
    with open(file_name, 'w') as fp:
        yaml.dump(md, fp, Dumper=MyRTD)


md = load()
md['users']['xxxx2']['timestamp'] = str(datetime.datetime.utcnow())
update(md, 'user.yaml')
Since the PyYAML based API requires a class instead of an object, you need to
use a class generator that actually adds the data elements to pre-populate on
the fly from within yaml.dump().
But this doesn't work, as a node only gets written out with an anchor once it is
determined that the anchor is used (i.e. there is a second reference). So actually the
first merge key gets written out as an anchor. And although I am quite familiar
with the code base, I could not get this to work properly in a reasonable amount of time.
So instead, I would just rely on the fact that there is only one key that matches
the first key of users.yaml at the root level of the dump of the combined updated
file and strip anything before that.
import sys
import datetime
from ruamel import yaml

with open('defaults.yaml') as fp:
    defaults_data = fp.read()
with open('user.yaml') as fp:
    user_data = fp.read()
merged_data = yaml.load("{}\n{}".format(defaults_data, user_data),
                        Loader=yaml.RoundTripLoader)

# find the key
for line in user_data.splitlines():
    line = line.split('# ')[0].rstrip()  # end of line comment, not checking for strings
    if line and line[-1] == ':' and line[0] != ' ':
        split_key = line
        break

merged_data['users']['xxxx2']['timestamp'] = str(datetime.datetime.utcnow())

buf = yaml.compat.StringIO()
yaml.dump(merged_data, buf, Dumper=yaml.RoundTripDumper)
document = split_key + buf.getvalue().split('\n' + split_key)[1]
sys.stdout.write(document)
which gives:
users:
  xxxx1:
    <<: *userdefaults
    timestamp: '2018-10-22 11:38:28.541810'
  xxxx2:
    <<: *userdefaults
    timestamp: '2018-10-23 09:59:13.829978'
I had to make a virtualenv to make sure I could run the above with ruamel.yaml==0.13.14.
That version is from the time I was still young (I won't claim to have been innocent).
There have been over 85 releases of the library since then.
I can understand that you might not be able to run anything but
Python2 at the moment and cannot compile/use a newer version. But what
you really should do is install virtualenv (can be done using EPEL, but also without
further "polluting" your system installation), make a virtualenv for the
code you are developing and install the latest version of ruamel.yaml (and
your other libraries) in there. You can also do that if you need
to distribute your software to other systems, just install virtualenv there as well.
I have all my utilities under /opt/util, and manage them with virtualenvutils, a wrapper around virtualenv.
For writing the user part, you will have to manually split the output of yaml.dump() and write the appropriate part back to the users YAML file.
import datetime
import StringIO
import ruamel.yaml

yaml = ruamel.yaml.YAML(typ='rt')
data = None
with open('defaults.yaml', 'r') as defaults:
    with open('users.yaml', 'r') as users:
        raw = "{}\n{}".format(''.join(defaults.readlines()), ''.join(users.readlines()))
        data = list(yaml.load_all(raw))

data[0]['users']['xxxx1']['timestamp'] = datetime.datetime.now().isoformat()

with open('users.yaml', 'w') as outfile:
    sio = StringIO.StringIO()
    yaml.dump(data[0], sio)
    out = sio.getvalue()
    outfile.write(out.split('\n\n')[1])  # write the second part here as this is the contents of users.yaml
I have executed SSH commands on a remote machine using the paramiko library and written the output to a text file. Now, I want to extract a few values from the text file. The output in the text file looks as pasted below:
b'\nMS Administrator\n(C) Copyright 2006-2016 LP\n\n[MODE]> SHOW INFO\n\n\nMode: \nTrusted Certificates\n1 Details\n------------\n\tDeveloper ID: MS-00c1\n\tTester ID: ms-00B1\n\tValid from: 2030-01-29T06:51:15Z\n\tValid until: 2030-01-30T06:51:15Z\n\t
How do I get the values of Developer ID and Tester ID? The file is huge.
As suggested by users I have written the snippet below.
import re

file = open("Output.txt").readlines()
for lines in file:
    word = re.findall(r'Developer\sID:\s(.*)\n', lines)[0]
    print(word)
I see the error IndexError: list index out of range
If I remove the index, I see empty output.
file = open("Output.txt").readlines()
developer_id = ""
for line in file:
    if 'Developer ID' in line:
        developer_id = line.split(":")[-1].strip()
print(developer_id)
You can use regular expressions:
text = """\nMS Administrator\n(C) Copyright 2006-2016 LP\n\n[MODE]> SHOW INFO\n\n\nMode: \nTrusted Certificates\n1 Details\n------------\n\tDeveloper ID: MS-00c1\n\tTester ID: ms-00B1\n\tValid from: 2030-01-29T06:51:15Z\n\tValid until: 2030-01-30T06:51:15Z\n\t"""
import re
developerID = re.search(r"Developer ID:\s*(.+)", text).group(1)
testerID = re.search(r"Tester ID:\s*(.+)", text).group(1)
If your output is consistent in format, you can use something as easy as line.split():
developer_id = line.split('\n')[11].lstrip()
tester_id = line.split('\n')[12].lstrip()
Again, this assumes that every line is using the same formatting. Otherwise, use regex as suggested above.
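If Output.txt really contains the repr of a bytes object (so "\n" and "\t" are literal backslash sequences rather than real line breaks), a hedged sketch that searches the whole content at once may be more robust than iterating line by line (file name as in the question):
import re

content = open("Output.txt").read()

# stop each capture at the next backslash escape or real newline
developer = re.search(r"Developer ID:\s*([^\\\n]+)", content)
tester = re.search(r"Tester ID:\s*([^\\\n]+)", content)
if developer:
    print(developer.group(1))  # e.g. MS-00c1
if tester:
    print(tester.group(1))     # e.g. ms-00B1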
How can one write comments to a given file within sections?
If I have:
import ConfigParser
with open('./config.ini', 'w') as f:
conf = ConfigParser.ConfigParser()
conf.set('DEFAULT', 'test', 1)
conf.write(f)
I will get the file:
[DEFAULT]
test = 1
But how can I get a file with comments inside [DEFAULT] section, like:
[DEFAULT]
; test comment
test = 1
I know I can write comments to the file by doing:
import ConfigParser
with open('./config.ini', 'w') as f:
conf = ConfigParser.ConfigParser()
conf.set('DEFAULT', 'test', 1)
conf.write(f)
f.write('; test comment') # but this gets printed after the section key-value pairs
Is this a possibility with ConfigParser? And I don't want to try another module because I need to keep my program as "stock" as possible.
You can use the allow_no_value option if you have version >= 2.7.
This snippet:
import ConfigParser
config = ConfigParser.ConfigParser(allow_no_value=True)
config.add_section('default_settings')
config.set('default_settings', '; comment here')
config.set('default_settings', 'test', 1)
with open('config.ini', 'w') as fp:
    config.write(fp)
config = ConfigParser.ConfigParser(allow_no_value=True)
config.read('config.ini')
print config.items('default_settings')
will create an ini file like this:
[default_settings]
; comment here
test = 1
Update for 3.7
I've been dealing with configparser lately and came across this post. Figured I'd update it with information relevant to 3.7.
Example 1:
config = configparser.ConfigParser(allow_no_value=True)
config.set('SECTION', '; This is a comment.', None)
Example 2:
config = configparser.ConfigParser(allow_no_value=True)
config['SECTION'] = {'; This is a comment': None, 'Option': 'Value'}
Example 3: If you want to keep your letter case unchanged (the default is to convert all option names to lowercase):
config = configparser.ConfigParser(allow_no_value=True)
config.optionxform = str
config.set('SECTION', '; This Comment Will Keep Its Original Case', None)
Where "SECTION" is the case-sensitive section name you want the comment added to. Using "None" (no quotes) instead of an empty string ('') will allow you to set the comment without leaving a trailing "=".
You can create a variable that starts with the # or ; character:
conf.set('default_settings', '; comment here', '')
conf.set('default_settings', 'test', 1)
The created conf file is:
[default_settings]
; comment here =
test = 1
The ConfigParser.read function won't parse the first value:
config = ConfigParser.ConfigParser()
config.read('config.ini')
print config.items('default_settings')
gives
[('test','1')]
You could also use ConfigUpdater. It has many more convenience options to update configuration files in a minimally invasive way.
You would basically do:
from configupdater import ConfigUpdater
updater = ConfigUpdater()
updater.add_section('DEFAULT')
updater.set('DEFAULT', 'test', 1)
updater['DEFAULT']['test'].add_before.comment('test comment', comment_prefix=';')
with open('./config.ini', 'w') as f:
    updater.write(f)
Freaky solution for the above :)
Note there is a side-effect; see if that suits you.
config = configparser.ConfigParser(comment_prefixes='///')
config.add_section('section')  # the section must exist before set()
config.set('section', '# cmt', 'comment goes here')
configparser will treat comments as variables, but real software would not.
This will even preserve the comments on writes done after a read of the same ini file, which is a real game changer (disappearing comments are just horrible) :) and you don't need allow_no_value=True to allow an empty value, just minor visual candy :)
so the ini file would look like:
[section]
# cmt = comment goes here
which pretty much gets the job done :)
Please make sure to initialize comment_prefixes with a string that would never appear in your ini file, just in case.
This worked for me in 3.9.
Side effect on writing already existing comments: they will not disappear (disappearing was the normal default), but will be converted to a similar form # first = <remaining>, where first is the first word of the comment and remaining is the rest of the comment. This changes how the file looks, so be careful...