Finding pattern in Python

I have a file. I want to create a pattern and then calculate some values. I am using the code below. How can I split these 3 values? Also, there are white spaces in some columns, as in the picture below.
import re

title = movies[3].split(",")[1]
pattern = r"(\d+)[(\d+)-(\d+)]"
re.findall(pattern, title)

You forgot to escape the characters [ and ]. As written, they are interpreted as delimiting a character class, i.e. a set of characters to match.
Change your regular expression to:
pattern = r"(\d+)\[(\d+)-(\d+)\]"
To allow optional whitespace in the regular expression you can use \s*. So the full regular expression would be:
pattern = r"(\d+)\[\s*(\d+)-(\d+)\s*\]"

If you're looking specifically for the - character, you will need to escape it with \, as this makes the expression look for the literal character instead of using it to define a range.
This means your new pattern should look like: (\d+)[(\d+)\-(\d+)]
As a side-note, I can recommend using regex101 to double-check your patterns before using them!
Building on Matthias' answer: if the square brackets are part of the title, you will also want to escape them, like so: (\d+)\[(\d+)\-(\d+)\]
This will look for two values separated by a hyphen, within square brackets.

Related

How to replace '..' and '?.' with single periods and question marks in pandas? df['column'].str.replace not working

This is a follow-up to this SO post, which gives a solution for replacing text in a string column:
How to replace text in a column of a Pandas dataframe?
df['range'] = df['range'].str.replace(',','-')
However, this doesn't seem to work with double periods, or with a question mark followed by a period:
import pandas as pd

testList = ['this is a.. test stence', 'for which is ?. was a time']
testDf = pd.DataFrame(testList, columns=['strings'])
testDf['strings'].str.replace('..', '.').head()
results in
0 ...........e
1 .............
Name: strings, dtype: object
and
testDf['strings'].str.replace('?.', '?').head()
results in
error: nothing to repeat at position 0
Add the regex=False parameter because, as you can see in the docs, regex is True by default:
regex : bool, default True
Determines if the passed-in pattern is a regular expression:
If True, assumes the passed-in pattern is a regular expression.
And ? and . are special characters in regular expressions.
So, one way to do it without regex will be this double replacement:
testDf['strings'].str.replace('..', '.', regex=False).str.replace('?.', '?', regex=False)
Output:
strings
0 this is a. test stence
1 for which is ? was a time
Replace using a regular expression. In this case, replace the special character '.' when it is followed immediately by white space. This is a bit curly; I advise you to go with @Mark Reed's answer.
testDf.replace(regex=r'([.](?=\s))', value=r'')
strings
0 this is a. test stence
1 for which is ? was a time
str.replace() works with a Regex where . is a special character which denotes "any" character. If you want a literal dot, you need to escape it: "\.". Same for other special Regex characters like ?.
First, be aware that the Pandas replace method is different from the standard Python one, which operates only on fixed strings. The Pandas one can behave as either the regular string.replace or re.sub (the regular-expression substitute method), depending on the value of a flag, and the default is to act like re.sub. So you need to treat your first argument as a regular expression. That means you do have to change the string, but it also has the benefit of allowing you to do both substitutions in a single call.
A regular expression isn't a string to be searched for literally, but a pattern that acts as instructions telling Python what to look for. Most characters just ask Python to match themselves, but some are special, and both . and ? happen to be in the special category.
The easiest thing to do is to use a character class to match either . or ? followed by a period, and remember which one it was so that it can be included in the replacement, just without the following period. That looks like this:
testDf.replace(regex=r'([.?])\.', value=r'\1')
The [.?] means "match either a period or a question mark"; since they're inside the [...], those normally-special characters don't need to be escaped. The parentheses around the square brackets tell Python to remember which of those two characters is the one it actually found. The next thing that has to be there in order to match is the period you're trying to get rid of, which has to be escaped with a backslash because this one's not inside [...].
In the replacement, the special sequence \1 means "whatever you found that matched the pattern between the first set of parentheses", so that's either the period or question mark. Since that's the entire replacement, the following period is removed.
Now, you'll notice I used raw strings (r'...') for both; that keeps Python from doing its own interpretation of the backslashes before replace can. If the replacement were just '\1' without the r it would replace them with character code 1 (control-A) instead of the first matched group.
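For reference, here is that single-call substitution run against the sample data from the question, as a minimal self-contained sketch:

import pandas as pd

testList = ['this is a.. test stence', 'for which is ?. was a time']
testDf = pd.DataFrame(testList, columns=['strings'])

# one call handles both cases: '..' -> '.' and '?.' -> '?'
cleaned = testDf.replace(regex=r'([.?])\.', value=r'\1')
print(cleaned)   # rows become 'this is a. test stence' and 'for which is ? was a time'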
To replace both the .. and ?. at the same time you can separate the patterns with | (the regex OR operator).
testDf['strings'].str.replace(r'\?.|\..', '.')
Prefix the . with a \ because you need to escape it, since . is a regex special character:
testDf['strings'].str.replace(r'\..', '.')
You can do the same with the ?, which is another regex special character.
testDf['strings'].str.replace(r'\?.', '.')

Regex for matching German characters in Python

Could someone help me with a regex to match German words/sentences in Python? It does not work in a Jupyter notebook. I tried the same in JSFiddle and it works fine. I tried the script below but it does not work:
import re
pattern = re.compile(r'\[^a-zA-Z0-9äöüÄÖÜß]\\', re.UNICODE)
print(pattern.search(text))
Your expression will always fail:
\[^a-zA-Z0-9äöüÄÖÜß]\\
Broken down, you require
\[    # a literal [
^     # the start of the line / text
a-z   # literally, etc.
The problem is that you require a [ literally right before the start of a line which can never be true (either there's nothing or a newline). So in the end, either remove the backslash to get a proper character class as in:
[^a-zA-Z0-9äöüÄÖÜß]+
But this will surely not match the words you're looking for (quite the opposite). So either use something as simple as \w+ or the solution proposed by @Wiktor in the comments section.
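For illustration, a minimal sketch of the character-class approach on a made-up German sentence (the original text variable isn't shown in the question):

import re

text = "Straße über den Fluß"   # made-up sample text
# match runs of German word characters, including umlauts and ß
pattern = re.compile(r'[a-zA-Z0-9äöüÄÖÜß]+')
print(pattern.findall(text))   # ['Straße', 'über', 'den', 'Fluß']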
The square brackets define a set of characters you want to look for; however, the '^' negates these characters if it appears first within the character class.
If you want to specify the beginning of the line you need to put the '^' before the brackets.
Also, you need to add a quantifier after the class to match more than just one character in this case:
r'^[a-zA-Z0-9äöüÄÖÜß]+'
One or more characters contained in the brackets are matched, as long as they are not separated by any other character not listed between '[]'.
Here's the link to the official documentation

Regular expression match / split

I am having some trouble trying to figure out how to use regular expressions in Python. Ultimately I am trying to do what sscanf does for me in C.
I am trying to match given strings that look like so:
12345_arbitrarystring_2020_05_20_10_10_10.dat
I (seem) to be able to validate this format by calling match on the following regular expression
regex = re.compile('[0-9]{5}_.+_[0-9]{4}([-_])[0-9]{2}([-_])[0-9]{2}([-_])[0-9]{2}([:_])[0-9]{2}([:_])[0-9]{2}\\.dat')
(Note that I do allow for a few other separators then just '_')
I would like to split the given string on these separators so I do:
regex = re.compile('[_\\-:.]+')
parts = regex.split(given_string)
This is all fine .. the problem is that I would like my 'arbitrarystring' part to include '-' and '_' and the last split currently, well, splits them.
Other than manually cutting the timestamp and the first 5 digits off that given string, what can I do to get that arbitrarystring part?
You could use a capturing group to get the arbitrarystring part and omit the other capturing groups.
You could for example use a character class to match 1+ word characters or a hyphen using [\w-]+
If you still want to use split, you could add capturing groups for the first and the second part, and split only those groups.
^[0-9]{5}_([\w-]+)_[0-9]{4}[-_][0-9]{2}[-_][0-9]{2}[-_][0-9]{2}[:_][0-9]{2}[:_][0-9]{2}\.dat$
(the ([\w-]+) group captures the arbitrary string part)
Regex demo
It seems to be possible to cut down your regex to validate the whole pattern to:
^\d{5}_(.+?)_\d{4}[-_](?:\d{2}[-_]){2}(?:\d{2}[:_]){2}\d{2}\.dat$
Refer to group 1 for your arbitrary string.
Online demo
Quick reminder: you didn't seem to use raw strings, but instead escaped with double backslashes. Python has raw strings, which mean you don't have to escape backslashes anymore.
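As a quick sketch, applying that shortened pattern (written as a raw string) to the example filename from the question:

import re

filename = "12345_arbitrarystring_2020_05_20_10_10_10.dat"
pattern = re.compile(r'^\d{5}_(.+?)_\d{4}[-_](?:\d{2}[-_]){2}(?:\d{2}[:_]){2}\d{2}\.dat$')
m = pattern.match(filename)
if m:
    print(m.group(1))   # arbitrarystring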

Python regex match all sentences that include either wordA or wordB [duplicate]

I'm creating a javascript regex to match queries in a search engine string. I am having a problem with alternation. I have the following regex:
.*baidu.com.*[/?].*wd{1}=
I want to be able to match strings that have the string 'word' or 'qw' in addition to 'wd', but everything I try is unsuccessful. I thought I would be able to do something like the following:
.*baidu.com.*[/?].*[wd|word|qw]{1}=
but it does not seem to work.
Replace [wd|word|qw] with (wd|word|qw) or (?:wd|word|qw).
[] denotes character sets, () denotes logical groupings.
Your expression:
.*baidu.com.*[/?].*[wd|word|qw]{1}=
does need a few changes, including changing [wd|word|qw] to (wd|word|qw) and getting rid of the redundant {1}, like so:
.*baidu.com.*[/?].*(wd|word|qw)=
But you also need to understand that the first part of your expression (.*baidu.com.*[/?].*) will match baidu.com hello what spelling/handle????????? or hbaidu-com/ or even something like lkas----jhdf lkja$##!3hdsfbaidugcomlaksjhdf.[($?lakshf, because the dot (.) matches any character except newlines... to match a literal dot, you have to escape it with a backslash (like \.)
There are several approaches you could take to match things in a URL, but we could help you more if you tell us what you are trying to do or accomplish - perhaps regex is not the best solution or (EDIT) only part of the best solution?
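For what it's worth, a minimal Python sketch of the corrected pattern (the URL below is only a made-up illustration, and the dot in baidu.com is escaped as suggested above):

import re

url = "https://www.baidu.com/s?wd=regex+alternation"   # hypothetical example URL
pattern = re.compile(r'.*baidu\.com.*[/?].*(wd|word|qw)=')
m = pattern.search(url)
if m:
    print(m.group(1))   # wd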

Return multiple matches within a string using a regex

I'm using the following regex to clean up a document that has had apostrophes accidentally replaced with double quotes:
([a-zA-Z]\"[a-zA-Z])
That finds the first pattern match within the string, but not any subsequent ones. I've used the '*' operator after the group, which I understood would return multiple matches of that pattern, but this returns none. I've tested the regex here by adding double quotes to the example string.
Does anyone know what the operator I need is for this example?
Thanks
You might need to turn on global matching, which in Python is done by using re.findall() instead of re.search(). On Regexr, the global flag is enabled like this:
(image: the regex flags menu in the top right corner: http://puu.sh/kgLFC/5958420d09.png)
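In Python itself, a minimal sketch with a made-up sample string; re.findall returns every match, and re.sub (with the letters captured separately) puts the apostrophes back:

import re

text = 'it"s a test, and here"s another'   # made-up sample text
print(re.findall(r'([a-zA-Z]\"[a-zA-Z])', text))   # ['t"s', 'e"s']  (all matches, not just the first)
# to actually repair the text, substitute the quote back to an apostrophe
print(re.sub(r'([a-zA-Z])\"([a-zA-Z])', r"\1'\2", text))   # it's a test, and here's another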
