def create_token_indices(self, tokens):
"""If `apply_encoding_options` is inadequate, one can retrieve tokens from `self.token_counts`, filter with
a desired strategy and regenerate `token_index` using this method. The token index is subsequently used
when `encode_texts` or `decode_texts` method... | If `apply_encoding_options` is inadequate, one can retrieve tokens from `self.token_counts`, filter with
a desired strategy and regenerate `token_index` using this method. The token index is subsequently used
when `encode_texts` or `decode_texts` methods are called. | entailment |
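The filter-then-rebuild workflow this docstring describes can be sketched in plain Python. The helper below is illustrative, not the library's actual implementation; reserving index 0 is an assumed convention.

```python
from collections import Counter

def create_token_indices(tokens):
    """Rebuild a token -> index mapping; index 0 is reserved (e.g. for padding)."""
    return {token: i + 1 for i, token in enumerate(sorted(set(tokens)))}

# Retrieve counts, filter with a desired strategy, then regenerate the index.
token_counts = Counter({"the": 10, "cat": 3, "rare": 1})
kept = [t for t, c in token_counts.items() if c >= 2]
token_index = create_token_indices(kept)
print(token_index)  # → {'cat': 1, 'the': 2}
```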
def apply_encoding_options(self, min_token_count=1, limit_top_tokens=None):
"""Applies the given settings for subsequent calls to `encode_texts` and `decode_texts`. This allows you to
play with different settings without having to re-run tokenization on the entire corpus.
Args:
min_... | Applies the given settings for subsequent calls to `encode_texts` and `decode_texts`. This allows you to
play with different settings without having to re-run tokenization on the entire corpus.
Args:
min_token_count: The minimum token count (frequency) in order to include during encoding. A... | entailment |
def encode_texts(self, texts, unknown_token="<UNK>", verbose=1, **kwargs):
"""Encodes the given texts using internal vocabulary with optionally applied encoding options. See
`apply_encoding_options` to set various options.
Args:
texts: The list of text items to encode.
... | Encodes the given texts using internal vocabulary with optionally applied encoding options. See
`apply_encoding_options` to set various options.
Args:
texts: The list of text items to encode.
unknown_token: The token to replace words that are out of vocabulary. If None, those words... | entailment |
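A minimal sketch of the unknown-token behavior described for `encode_texts`; whitespace tokenization and the vocabulary shape are assumptions made for the example, not the library's API.

```python
def encode_texts(texts, vocab, unknown_token="<UNK>"):
    """Encode texts to id lists; out-of-vocabulary words map to unknown_token,
    or are dropped entirely when unknown_token is None."""
    unk_id = vocab.get(unknown_token) if unknown_token else None
    encoded = []
    for text in texts:
        ids = []
        for word in text.split():
            if word in vocab:
                ids.append(vocab[word])
            elif unk_id is not None:
                ids.append(unk_id)
        encoded.append(ids)
    return encoded

vocab = {"<UNK>": 0, "hello": 1, "world": 2}
print(encode_texts(["hello there world"], vocab))        # → [[1, 0, 2]]
print(encode_texts(["hello there world"], vocab, None))  # → [[1, 2]]
```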
def decode_texts(self, encoded_texts, unknown_token="<UNK>", inplace=True):
"""Decodes the texts using internal vocabulary. The list structure is maintained.
Args:
encoded_texts: The list of texts to decode.
unknown_token: The placeholder value for unknown token. (Default value:... | Decodes the texts using internal vocabulary. The list structure is maintained.
Args:
encoded_texts: The list of texts to decode.
unknown_token: The placeholder value for unknown token. (Default value: "<UNK>")
inplace: True to make changes inplace. (Default value: True)
... | entailment |
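Decoding is the inverse lookup; a sketch using a hypothetical reverse index, with the documented placeholder for unknown ids:

```python
def decode_texts(encoded_texts, index_to_token, unknown_token="<UNK>"):
    """Map id sequences back to tokens; the list structure is maintained."""
    return [[index_to_token.get(i, unknown_token) for i in seq]
            for seq in encoded_texts]

print(decode_texts([[1, 9]], {1: "hello", 2: "world"}))  # → [['hello', '<UNK>']]
```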
def build_vocab(self, texts, verbose=1, **kwargs):
"""Builds the internal vocabulary and computes various statistics.
Args:
texts: The list of text items to encode.
verbose: The verbosity level for progress. Can be 0, 1, 2. (Default value = 1)
**kwargs: The kwargs fo... | Builds the internal vocabulary and computes various statistics.
Args:
texts: The list of text items to encode.
verbose: The verbosity level for progress. Can be 0, 1, 2. (Default value = 1)
**kwargs: The kwargs for `token_generator`. | entailment |
def pad_sequences(self, sequences, fixed_sentences_seq_length=None, fixed_token_seq_length=None,
padding='pre', truncating='post', padding_token="<PAD>"):
"""Pads each sequence to the same fixed length (length of the longest sequence or provided override).
Args:
sequen... | Pads each sequence to the same fixed length (length of the longest sequence or provided override).
Args:
sequences: list of list (samples, words) or list of list of list (samples, sentences, words)
fixed_sentences_seq_length: The fixed sentence sequence length to use. If None, largest sen... | entailment |
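The pre/post padding and truncating options can be sketched for the flat (samples, words) case; the parameter names mirror the docstring, but the body is illustrative:

```python
def pad_sequence(seq, length, padding="pre", truncating="post", pad_value=0):
    """Pad or truncate one sequence to exactly `length` items."""
    if len(seq) > length:
        seq = seq[:length] if truncating == "post" else seq[-length:]
    pad = [pad_value] * (length - len(seq))
    return pad + seq if padding == "pre" else seq + pad

def pad_sequences(sequences, fixed_token_seq_length=None, **kwargs):
    # Default to the length of the longest sequence.
    length = fixed_token_seq_length or max(len(s) for s in sequences)
    return [pad_sequence(list(s), length, **kwargs) for s in sequences]

print(pad_sequences([[1, 2], [3, 4, 5]]))  # → [[0, 1, 2], [3, 4, 5]]
```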
def get_stats(self, i):
"""Gets the standard statistics for aux_index `i`. For example, if `token_generator` generates
`(text_idx, sentence_idx, word)`, then `get_stats(0)` will return various statistics about sentence lengths
across texts. Similarly, `get_stats(1)` will return statistics of to... | Gets the standard statistics for aux_index `i`. For example, if `token_generator` generates
`(text_idx, sentence_idx, word)`, then `get_stats(0)` will return various statistics about sentence lengths
across texts. Similarly, `get_stats(1)` will return statistics of token lengths across sentences.
... | entailment |
def build_embedding_weights(word_index, embeddings_index):
"""Builds an embedding matrix for all words in vocab using embeddings_index
"""
logger.info('Loading embeddings for all words in the corpus')
embedding_dim = list(embeddings_index.values())[0].shape[-1]
# setting special tokens such as UNK ... | Builds an embedding matrix for all words in vocab using embeddings_index | entailment |
def build_fasttext_wiki_embedding_obj(embedding_type):
"""FastText pre-trained word vectors for 294 languages, with 300 dimensions, trained on Wikipedia. It's recommended to use the same tokenizer for your data that was used to construct the embeddings. It's implemented as 'FasttextWikiTokenizer'. More information:... | FastText pre-trained word vectors for 294 languages, with 300 dimensions, trained on Wikipedia. It's recommended to use the same tokenizer for your data that was used to construct the embeddings. It's implemented as 'FasttextWikiTokenizer'. More information: https://fasttext.cc/docs/en/pretrained-vectors.html.
Arg... | entailment |
def build_fasttext_cc_embedding_obj(embedding_type):
"""FastText pre-trained word vectors for 157 languages, with 300 dimensions, trained on Common Crawl and Wikipedia. Released in 2018, it succeesed the 2017 FastText Wikipedia embeddings. It's recommended to use the same tokenizer for your data that was used to co... | FastText pre-trained word vectors for 157 languages, with 300 dimensions, trained on Common Crawl and Wikipedia. Released in 2018, it succeesed the 2017 FastText Wikipedia embeddings. It's recommended to use the same tokenizer for your data that was used to construct the embeddings. This information and more can be fin... | entailment |
def get_embeddings_index(embedding_type='glove.42B.300d', embedding_dims=None, embedding_path=None, cache=True):
"""Retrieves embeddings index from embedding name or path. Will automatically download and cache as needed.
Args:
embedding_type: The embedding type to load.
embedding_path: Path to ... | Retrieves embeddings index from embedding name or path. Will automatically download and cache as needed.
Args:
embedding_type: The embedding type to load.
embedding_path: Path to a local embedding to use instead of the embedding type. Ignores `embedding_type` if specified.
Returns:
The... | entailment |
def token_generator(self, texts, **kwargs):
"""Yields tokens from texts as `(text_idx, character)`
"""
for text_idx, text in enumerate(texts):
if self.lower:
text = text.lower()
for char in text:
yield text_idx, char | Yields tokens from texts as `(text_idx, character)` | entailment |
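As a standalone function, the character generator above behaves like this (the `lower` flag stands in for the instance attribute):

```python
def char_token_generator(texts, lower=True):
    """Yield tokens from texts as (text_idx, character)."""
    for text_idx, text in enumerate(texts):
        if lower:
            text = text.lower()
        for char in text:
            yield text_idx, char

print(list(char_token_generator(["Hi", "ok"])))
# → [(0, 'h'), (0, 'i'), (1, 'o'), (1, 'k')]
```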
def token_generator(self, texts, **kwargs):
"""Yields tokens from texts as `(text_idx, sent_idx, character)`
Args:
texts: The list of texts.
**kwargs: Supported args include:
n_threads/num_threads: Number of threads to use. Uses num_cpus - 1 by default.
... | Yields tokens from texts as `(text_idx, sent_idx, character)`
Args:
texts: The list of texts.
**kwargs: Supported args include:
n_threads/num_threads: Number of threads to use. Uses num_cpus - 1 by default.
batch_size: The number of texts to accumulate in... | entailment |
def equal_distribution_folds(y, folds=2):
"""Creates `folds` number of indices that has roughly balanced multi-label distribution.
Args:
y: The multi-label outputs.
folds: The number of folds to create.
Returns:
`folds` number of indices that have roughly equal multi-label distribu... | Creates `folds` number of indices that have roughly balanced multi-label distribution.
Args:
y: The multi-label outputs.
folds: The number of folds to create.
Returns:
`folds` number of indices that have roughly equal multi-label distributions. | entailment |
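One greedy way to approximate balanced multi-label folds: assign each sample to the fold currently holding the fewest samples with that sample's active labels. This is a sketch of the idea, not the library's algorithm.

```python
def equal_distribution_folds(y, folds=2):
    """y is a list of binary label vectors; returns `folds` index lists."""
    n_labels = len(y[0])
    fold_indices = [[] for _ in range(folds)]
    counts = [[0] * n_labels for _ in range(folds)]
    for i, labels in enumerate(y):
        active = [j for j, v in enumerate(labels) if v]
        # Prefer the fold with the fewest samples of these labels,
        # breaking ties by overall fold size.
        best = min(range(folds),
                   key=lambda f: (sum(counts[f][j] for j in active),
                                  len(fold_indices[f])))
        fold_indices[best].append(i)
        for j in active:
            counts[best][j] += 1
    return fold_indices

print(equal_distribution_folds([[1, 0], [1, 0], [0, 1], [0, 1]], folds=2))
# → [[0, 2], [1, 3]]
```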
def build_model(self, token_encoder_model, sentence_encoder_model,
trainable_embeddings=True, output_activation='softmax'):
"""Builds a model that first encodes all words within sentences using `token_encoder_model`, followed by
`sentence_encoder_model`.
Args:
to... | Builds a model that first encodes all words within sentences using `token_encoder_model`, followed by
`sentence_encoder_model`.
Args:
token_encoder_model: An instance of `SequenceEncoderBase` for encoding tokens within sentences. This model
will be applied across all sentenc... | entailment |
def process_save(X, y, tokenizer, proc_data_path, max_len=400, train=False, ngrams=None, limit_top_tokens=None):
"""Process text and save as Dataset
"""
if train and limit_top_tokens is not None:
tokenizer.apply_encoding_options(limit_top_tokens=limit_top_tokens)
X_encoded = tokenizer.encode_te... | Process text and save as Dataset | entailment |
def setup_data(X, y, tokenizer, proc_data_path, **kwargs):
"""Setup data
Args:
X: text data,
y: data labels,
tokenizer: A Tokenizer instance
proc_data_path: Path for the processed data
"""
# only build vocabulary once (e.g. training data)
train = ... | Setup data
Args:
X: text data,
y: data labels,
tokenizer: A Tokenizer instance
proc_data_path: Path for the processed data | entailment |
def split_data(X, y, ratio=(0.8, 0.1, 0.1)):
"""Splits data into a training, validation, and test set.
Args:
X: text data
y: data labels
ratio: the ratio for splitting. Default: (0.8, 0.1, 0.1)
Returns:
split data: X_train, X_val, X_test, y_train, y_... | Splits data into a training, validation, and test set.
Args:
X: text data
y: data labels
ratio: the ratio for splitting. Default: (0.8, 0.1, 0.1)
Returns:
split data: X_train, X_val, X_test, y_train, y_val, y_test | entailment |
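A straightforward contiguous split matching the documented ratio; shuffling, if desired, is assumed to happen before this step:

```python
def split_data(X, y, ratio=(0.8, 0.1, 0.1)):
    """Split X and y into train/validation/test slices by ratio."""
    n_train = int(len(X) * ratio[0])
    n_val = int(len(X) * ratio[1])
    X_train, y_train = X[:n_train], y[:n_train]
    X_val, y_val = X[n_train:n_train + n_val], y[n_train:n_train + n_val]
    X_test, y_test = X[n_train + n_val:], y[n_train + n_val:]
    return X_train, X_val, X_test, y_train, y_val, y_test

parts = split_data(list(range(10)), list(range(10)))
print([len(p) for p in parts])  # → [8, 1, 1, 8, 1, 1]
```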
def setup_data_split(X, y, tokenizer, proc_data_dir, **kwargs):
"""Setup data while splitting into a training, validation, and test set.
Args:
X: text data,
y: data labels,
tokenizer: A Tokenizer instance
proc_data_dir: Directory for the split and processed d... | Setup data while splitting into a training, validation, and test set.
Args:
X: text data,
y: data labels,
tokenizer: A Tokenizer instance
proc_data_dir: Directory for the split and processed data | entailment |
def load_data_split(proc_data_dir):
"""Loads a split dataset
Args:
proc_data_dir: Directory with the split and processed data
Returns:
(Training Data, Validation Data, Test Data)
"""
ds_train = Dataset.load(path.join(proc_data_dir, 'train.bin'))
ds_val = Dataset... | Loads a split dataset
Args:
proc_data_dir: Directory with the split and processed data
Returns:
(Training Data, Validation Data, Test Data) | entailment |
def build_model(self, token_encoder_model, trainable_embeddings=True, output_activation='softmax'):
"""Builds a model using the given `text_model`
Args:
token_encoder_model: An instance of `SequenceEncoderBase` for encoding all the tokens within a document.
This encoding is ... | Builds a model using the given `text_model`
Args:
token_encoder_model: An instance of `SequenceEncoderBase` for encoding all the tokens within a document.
This encoding is then fed into a final `Dense` layer for classification.
trainable_embeddings: Whether or not to fin... | entailment |
def _softmax(x, dim):
"""Computes softmax along a specified dim. Keras currently lacks this feature.
"""
if K.backend() == 'tensorflow':
import tensorflow as tf
return tf.nn.softmax(x, dim)
elif K.backend() == 'cntk':
import cntk
return cntk.softmax(x, dim)
elif K.ba... | Computes softmax along a specified dim. Keras currently lacks this feature. | entailment |
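Backend dispatch aside, the operation itself is small; a numerically stable pure-Python sketch for one vector:

```python
import math

def softmax(xs):
    """Softmax over a flat list of floats."""
    m = max(xs)  # subtract the max so exp() cannot overflow
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([1.0, 1.0]))  # → [0.5, 0.5]
```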
def _apply_options(self, token):
"""Applies various filtering and processing options on token.
Returns:
The processed token. None if filtered.
"""
# Apply word token filtering.
if token.is_punct and self.remove_punct:
return None
if token.is_stop ... | Applies various filtering and processing options on token.
Returns:
The processed token. None if filtered. | entailment |
def token_generator(self, texts, **kwargs):
"""Yields tokens from texts as `(text_idx, word)`
Args:
texts: The list of texts.
**kwargs: Supported args include:
n_threads/num_threads: Number of threads to use. Uses num_cpus - 1 by default.
batch_si... | Yields tokens from texts as `(text_idx, word)`
Args:
texts: The list of texts.
**kwargs: Supported args include:
n_threads/num_threads: Number of threads to use. Uses num_cpus - 1 by default.
batch_size: The number of texts to accumulate into a common wor... | entailment |
def _append(lst, indices, value):
"""Adds `value` to `lst` list indexed by `indices`. Will create sub lists as required.
"""
for i, idx in enumerate(indices):
# We need to loop because sometimes indices can increment by more than 1 due to missing tokens.
# Example: Sentence with no words aft... | Adds `value` to `lst` list indexed by `indices`. Will create sub lists as required. | entailment |
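The create-sub-lists-as-required behavior can be sketched with a loop that extends each level before descending into it (illustrative, not the original body):

```python
def _append(lst, indices, value):
    """Add `value` to `lst` indexed by `indices`, creating sub-lists as needed."""
    for idx in indices:
        # Indices may skip ahead (e.g. an empty sentence), so extend rather
        # than append a single level.
        while len(lst) <= idx:
            lst.append([])
        lst = lst[idx]
    lst.append(value)

data = []
_append(data, [0, 1], "word")
print(data)  # → [[[], ['word']]]
```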
def _parse_spacy_kwargs(**kwargs):
"""Supported args include:
Args:
n_threads/num_threads: Number of threads to use. Uses num_cpus - 1 by default.
batch_size: The number of texts to accumulate into a common working set before processing.
(Default value: 1000)
"""
n_threads =... | Supported args include:
Args:
n_threads/num_threads: Number of threads to use. Uses num_cpus - 1 by default.
batch_size: The number of texts to accumulate into a common working set before processing.
(Default value: 1000) | entailment |
def update(self, indices):
"""Updates counts based on indices. The algorithm tracks the index change at i and
updates global counts for all indices beyond i with the local counts tracked so far.
"""
# Initialize various lists for the first time based on length of indices.
if self._pre... | Updates counts based on indices. The algorithm tracks the index change at i and
updates global counts for all indices beyond i with the local counts tracked so far. | entailment |
def finalize(self):
"""This will add the very last document to counts. We also get rid of counts[0] since that
represents the document level, which doesn't come under anything else. We also convert all count
values to numpy arrays so that stats can be computed easily.
"""
for i in rang... | This will add the very last document to counts. We also get rid of counts[0] since that
represents the document level, which doesn't come under anything else. We also convert all count
values to numpy arrays so that stats can be computed easily. | entailment |
def read_folder(directory):
"""read text files in directory and returns them as array
Args:
directory: where the text files are
Returns:
Array of text
"""
res = []
for filename in os.listdir(directory):
with io.open(os.path.join(directory, filename), encoding="utf-8") a... | Reads text files in a directory and returns them as an array
Args:
directory: where the text files are
Returns:
Array of text | entailment |
def read_pos_neg_data(path, folder, limit):
"""returns array with positive and negative examples"""
training_pos_path = os.path.join(path, folder, 'pos')
training_neg_path = os.path.join(path, folder, 'neg')
X_pos = read_folder(training_pos_path)
X_neg = read_folder(training_neg_path)
if limit... | returns array with positive and negative examples | entailment |
def imdb(limit=None, shuffle=True):
"""Downloads (and caches) IMDB Moview Reviews. 25k training data, 25k test data
Args:
limit: get only first N items for each class
Returns:
[X_train, y_train, X_test, y_test]
"""
movie_review_url = 'http://ai.stanford.edu/~amaas/data/sentiment/a... | Downloads (and caches) IMDB Movie Reviews: 25k training samples, 25k test samples
Args:
limit: get only first N items for each class
Returns:
[X_train, y_train, X_test, y_test] | entailment |
def to_absolute(self, x, y):
"""
Converts coordinates provided with reference to the center \
of the canvas (0, 0) to absolute coordinates which are used \
by the canvas object in which (0, 0) is located in the top \
left of the object.
:param x: x value in pixels
... | Converts coordinates provided with reference to the center \
of the canvas (0, 0) to absolute coordinates which are used \
by the canvas object in which (0, 0) is located in the top \
left of the object.
:param x: x value in pixels
:param y: y value in pixels
:return: No... | entailment |
def set_value(self, number: (float, int)):
"""
Sets the value of the graphic
:param number: the number (must be between 0 and \
'max_range' or the scale will peg the limits)
:return: None
"""
self.canvas.delete('all')
self.canvas.create_image(0, 0, image=s... | Sets the value of the graphic
:param number: the number (must be between 0 and \
'max_range' or the scale will peg the limits)
:return: None | entailment |
def _draw_background(self, divisions=10):
"""
Draws the background of the dial
:param divisions: the number of divisions
between 'ticks' shown on the dial
:return: None
"""
self.canvas.create_arc(2, 2, self.size-2, self.size-2,
styl... | Draws the background of the dial
:param divisions: the number of divisions
between 'ticks' shown on the dial
:return: None | entailment |
def draw_axes(self):
"""
Removes all existing series and re-draws the axes.
:return: None
"""
self.canvas.delete('all')
rect = 50, 50, self.w - 50, self.h - 50
self.canvas.create_rectangle(rect, outline="black")
for x in self.frange(0, self.x_max - self... | Removes all existing series and re-draws the axes.
:return: None | entailment |
def plot_point(self, x, y, visible=True, color='black', size=5):
"""
Places a single point on the grid
:param x: the x coordinate
:param y: the y coordinate
:param visible: True if the individual point should be visible
:param color: the color of the point
:param... | Places a single point on the grid
:param x: the x coordinate
:param y: the y coordinate
:param visible: True if the individual point should be visible
:param color: the color of the point
:param size: the point size in pixels
:return: The absolute coordinates as a tuple | entailment |
def plot_line(self, points: list, color='black', point_visibility=False):
"""
Plot a line of points
:param points: a list of tuples, each tuple containing an (x, y) point
:param color: the color of the line
:param point_visibility: True if the points \
should be individu... | Plot a line of points
:param points: a list of tuples, each tuple containing an (x, y) point
:param color: the color of the line
:param point_visibility: True if the points \
should be individually visible
:return: None | entailment |
def frange(start, stop, step, digits_to_round=3):
"""
Works like range for doubles
:param start: starting value
:param stop: ending value
:param step: the increment_value
:param digits_to_round: the digits to which to round \
(makes floating-point numbers much ea... | Works like range for doubles
:param start: starting value
:param stop: ending value
:param step: the increment_value
:param digits_to_round: the digits to which to round \
(makes floating-point numbers much easier to work with)
:return: generator | entailment |
def _load_new(self, img_data: str):
"""
Load a new image.
:param img_data: the image data as a base64 string
:return: None
"""
self._image = tk.PhotoImage(data=img_data)
self._image = self._image.subsample(int(200 / self._size),
... | Load a new image.
:param img_data: the image data as a base64 string
:return: None | entailment |
def to_grey(self, on: bool=False):
"""
Change the LED to grey.
:param on: Unused, here for API consistency with the other states
:return: None
"""
self._on = False
self._load_new(led_grey) | Change the LED to grey.
:param on: Unused, here for API consistency with the other states
:return: None | entailment |
def to_green(self, on: bool=False):
"""
Change the LED to green (on or off).
:param on: True or False
:return: None
"""
self._on = on
if on:
self._load_new(led_green_on)
if self._toggle_on_click:
self._canvas.bind('<Button... | Change the LED to green (on or off).
:param on: True or False
:return: None | entailment |
def to_red(self, on: bool=False):
"""
Change the LED to red (on or off)
:param on: True or False
:return: None
"""
self._on = on
if on:
self._load_new(led_red_on)
if self._toggle_on_click:
self._canvas.bind('<Button-1>', la... | Change the LED to red (on or off)
:param on: True or False
:return: None | entailment |
def to_yellow(self, on: bool=False):
"""
Change the LED to yellow (on or off)
:param on: True or False
:return: None
"""
self._on = on
if on:
self._load_new(led_yellow_on)
if self._toggle_on_click:
self._canvas.bind('<Butto... | Change the LED to yellow (on or off)
:param on: True or False
:return: None | entailment |
def _redraw(self):
"""
Forgets the current layout and redraws with the most recent information
:return: None
"""
for row in self._rows:
for widget in row:
widget.grid_forget()
offset = 0 if not self.headers else 1
for i, row in enumer... | Forgets the current layout and redraws with the most recent information
:return: None | entailment |
def remove_row(self, row_number: int=-1):
"""
Removes a specified row of data
:param row_number: the row to remove (defaults to the last row)
:return: None
"""
if len(self._rows) == 0:
return
row = self._rows.pop(row_number)
for widget in row... | Removes a specified row of data
:param row_number: the row to remove (defaults to the last row)
:return: None | entailment |
def add_row(self, data: list):
"""
Add a row of data to the current widget
:param data: a row of data
:return: None
"""
# validation
if self.headers:
if len(self.headers) != len(data):
raise ValueError
if len(data) != self.num... | Add a row of data to the current widget
:param data: a row of data
:return: None | entailment |
def add_row(self, data: list=None):
"""
Add a row of data to the current widget, add a <Tab> \
binding to the last element of the last row, and set \
the focus at the beginning of the next row.
:param data: a row of data
:return: None
"""
# validation
... | Add a row of data to the current widget, add a <Tab> \
binding to the last element of the last row, and set \
the focus at the beginning of the next row.
:param data: a row of data
:return: None | entailment |
def _read_as_dict(self):
"""
Read the data contained in all entries as a list of
dictionaries with the headers as the dictionary keys
:return: list of dicts containing all tabular data
"""
data = list()
for row in self._rows:
row_data = OrderedDict()
... | Read the data contained in all entries as a list of
dictionaries with the headers as the dictionary keys
:return: list of dicts containing all tabular data | entailment |
def _read_as_table(self):
"""
Read the data contained in all entries as a list of
lists containing all of the data
:return: list of dicts containing all tabular data
"""
rows = list()
for row in self._rows:
rows.append([row[i].get() for i in range(se... | Read the data contained in all entries as a list of
lists containing all of the data
:return: list of dicts containing all tabular data | entailment |
def add_row(self, data: list):
"""
Add a row of buttons each with their own callbacks to the
current widget. Each element in `data` will consist of a
label and a command.
:param data: a list of tuples of the form ('label', <callback>)
:return: None
"""
#... | Add a row of buttons each with their own callbacks to the
current widget. Each element in `data` will consist of a
label and a command.
:param data: a list of tuples of the form ('label', <callback>)
:return: None | entailment |
def add_row(self, key: str, default: str=None,
unit_label: str=None, enable: bool=None):
"""
Add a single row and re-draw as necessary
:param key: the name and dict accessor
:param default: the default value
:param unit_label: the label that should be \
a... | Add a single row and re-draw as necessary
:param key: the name and dict accessor
:param default: the default value
:param unit_label: the label that should be \
applied at the right of the entry
:param enable: the 'enabled' state (defaults to True)
:return: | entailment |
def reset(self):
"""
Clears all entries.
:return: None
"""
for i in range(len(self.values)):
self.values[i].delete(0, tk.END)
if self.defaults[i] is not None:
self.values[i].insert(0, self.defaults[i]) | Clears all entries.
:return: None | entailment |
def change_enables(self, enables_list: list):
"""
Enable/disable inputs.
:param enables_list: list containing enables for each key
:return: None
"""
for i, entry in enumerate(self.values):
if enables_list[i]:
entry.config(state=tk.NORMAL)
... | Enable/disable inputs.
:param enables_list: list containing enables for each key
:return: None | entailment |
def load(self, data: dict):
"""
Load values into the key/values via dict.
:param data: dict containing the key/values that should be inserted
:return: None
"""
for i, label in enumerate(self.keys):
key = label.cget('text')
if key in data.keys():
... | Load values into the key/values via dict.
:param data: dict containing the key/values that should be inserted
:return: None | entailment |
def get(self):
"""
Retrieve the GUI elements for program use.
:return: a dictionary containing all \
of the data from the key/value entries
"""
data = dict()
for label, entry in zip(self.keys, self.values):
data[label.cget('text')] = entry.get()
... | Retrieve the GUI elements for program use.
:return: a dictionary containing all \
of the data from the key/value entries | entailment |
def _pressed(self, evt):
"""
Clicked somewhere in the calendar.
"""
x, y, widget = evt.x, evt.y, evt.widget
item = widget.identify_row(y)
column = widget.identify_column(x)
if not column or not (item in self._items):
# clicked in the weekdays row or j... | Clicked somewhere in the calendar. | entailment |
def add(self, string: (str, list)):
"""
Clear the contents of the entry field and
insert the contents of string.
:param string: an str containing the text to display
:return:
"""
if len(self._entries) == 1:
self._entries[0].delete(0, 'end')
... | Clear the contents of the entry field and
insert the contents of string.
:param string: an str containing the text to display
:return: | entailment |
def remove(self):
"""
Deletes itself.
:return: None
"""
for e in self._entries:
e.grid_forget()
e.destroy()
self._remove_btn.grid_forget()
self._remove_btn.destroy()
self.deleted = True
if self._remove_callback:
... | Deletes itself.
:return: None | entailment |
def get(self):
"""
Returns the value for the slot.
:return: the entry value
"""
values = [e.get() for e in self._entries]
if len(self._entries) == 1:
return values[0]
else:
return values | Returns the value for the slot.
:return: the entry value | entailment |
def _redraw(self):
"""
Clears the current layout and re-draws all elements in self._slots
:return:
"""
if self._blank_label:
self._blank_label.grid_forget()
self._blank_label.destroy()
self._blank_label = None
for slot in self._slots:
... | Clears the current layout and re-draws all elements in self._slots
:return: | entailment |
def add(self, string: (str, list)):
"""
Add a new slot to the multi-frame containing the string.
:param string: a string to insert
:return: None
"""
slot = _SlotFrame(self,
remove_callback=self._redraw,
entries=self._slo... | Add a new slot to the multi-frame containing the string.
:param string: a string to insert
:return: None | entailment |
def clear(self):
"""
Clear out the multi-frame
:return:
"""
for slot in self._slots:
slot.grid_forget()
slot.destroy()
self._slots = [] | Clear out the multi-frame
:return: | entailment |
def clear(self):
"""
Clear the segment.
:return: None
"""
for _, frame in self._segments.items():
frame.configure(background=self._bg_color) | Clear the segment.
:return: None | entailment |
def set_value(self, value: str):
"""
Sets the value of the 7-segment display
:param value: the desired value
:return: None
"""
self.clear()
if '.' in value:
self._segments['period'].configure(background=self._color)
if value in ['0', '0.']:
... | Sets the value of the 7-segment display
:param value: the desired value
:return: None | entailment |
def _group(self, value: str):
"""
Takes a string and groups it appropriately with any
period or other appropriate punctuation so that it is
displayed correctly.
:param value: a string containing an integer or float
:return: None
"""
reversed_v = value[::-1... | Takes a string and groups it appropriately with any
period or other appropriate punctuation so that it is
displayed correctly.
:param value: a string containing an integer or float
:return: None | entailment |
def set_value(self, value: str):
"""
Sets the displayed digits based on the value string.
:param value: a string containing an integer or float value
:return: None
"""
[digit.clear() for digit in self._digits]
grouped = self._group(value) # return the parts, rev... | Sets the displayed digits based on the value string.
:param value: a string containing an integer or float value
:return: None | entailment |
def add_callback(self, callback: callable):
"""
Add a callback on change
:param callback: callable function
:return: None
"""
def internal_callback(*args):
try:
callback()
except TypeError:
callback(self.get())
... | Add a callback on change
:param callback: callable function
:return: None | entailment |
def set(self, value: int):
"""
Set the current value
:param value:
:return: None
"""
max_value = int(''.join(['1' for _ in range(self._bit_width)]), 2)
if value > max_value:
raise ValueError('the value {} is larger than '
... | Set the current value
:param value:
:return: None | entailment |
def get_bit(self, position: int):
"""
Returns the bit value at position
:param position: integer between 0 and <width>, inclusive
:return: the value at position as a integer
"""
if position > (self._bit_width - 1):
raise ValueError('position greater than the... | Returns the bit value at position
:param position: integer between 0 and <width>, inclusive
:return: the value at position as a integer | entailment |
def toggle_bit(self, position: int):
"""
Toggles the value at position
:param position: integer between 0 and 7, inclusive
:return: None
"""
if position > (self._bit_width - 1):
raise ValueError('position greater than the bit width')
self._value ^= (... | Toggles the value at position
:param position: integer between 0 and 7, inclusive
:return: None | entailment |
def set_bit(self, position: int):
"""
Sets the value at position
:param position: integer between 0 and 7, inclusive
:return: None
"""
if position > (self._bit_width - 1):
raise ValueError('position greater than the bit width')
self._value |= (1 << p... | Sets the value at position
:param position: integer between 0 and 7, inclusive
:return: None | entailment |
def clear_bit(self, position: int):
"""
Clears the value at position
:param position: integer between 0 and 7, inclusive
:return: None
"""
if position > (self._bit_width - 1):
raise ValueError('position greater than the bit width')
self._value &= ~(1... | Clears the value at position
:param position: integer between 0 and 7, inclusive
:return: None | entailment |
def _populate(self):
""" Populate this list by calling populate(), but only once. """
if not self._populated:
logging.debug("Populating lazy list %d (%s)" % (id(self), self.__class__.__name__))
try:
self.populate()
self._populated = True
... | Populate this list by calling populate(), but only once. | entailment |
def _register_admin(admin_site, model, admin_class):
""" Register model in the admin, ignoring any previously registered models.
Alternatively it could be used in the future to replace a previously
registered model.
"""
try:
admin_site.register(model, admin_class)
except admin.s... | Register model in the admin, ignoring any previously registered models.
Alternatively it could be used in the future to replace a previously
registered model. | entailment |
def core_choice_fields(metadata_class):
""" If the 'optional' core fields (_site and _language) are required,
list them here.
"""
fields = []
if metadata_class._meta.use_sites:
fields.append('_site')
if metadata_class._meta.use_i18n:
fields.append('_language')
return fi... | If the 'optional' core fields (_site and _language) are required,
list them here. | entailment |
def _monkey_inline(model, admin_class_instance, metadata_class, inline_class, admin_site):
""" Monkey patch the inline onto the given admin_class instance. """
if model in metadata_class._meta.seo_models:
# *Not* adding to the class attribute "inlines", as this will affect
# all instances from t... | Monkey patch the inline onto the given admin_class instance. | entailment |
def _with_inline(func, admin_site, metadata_class, inline_class):
""" Decorator for register function that adds an appropriate inline."""
def register(model_or_iterable, admin_class=None, **options):
# Call the (bound) function we were given.
# We have to assume it will be bound to admin_sit... | Decorator for register function that adds an appropriate inline. | entailment |
def auto_register_inlines(admin_site, metadata_class):
""" This is a questionable function that automatically adds our metadata
inline to all relevant models in the site.
"""
inline_class = get_inline(metadata_class)
for model, admin_class_instance in admin_site._registry.items():
_mon... | This is a questionable function that automatically adds our metadata
inline to all relevant models in the site. | entailment |
def get_linked_metadata(obj, name=None, context=None, site=None, language=None):
""" Gets metadata linked from the given object. """
# XXX Check that 'modelinstance' and 'model' metadata are installed in backends
# I believe that get_model() would return None if not
Metadata = _get_metadata_model(name)
... | Gets metadata linked from the given object. | entailment |
def populate_metadata(model, MetadataClass):
""" For a given model and metadata class, ensure there is metadata for every instance.
"""
content_type = ContentType.objects.get_for_model(model)
for instance in model.objects.all():
create_metadata_instance(MetadataClass, instance) | For a given model and metadata class, ensure there is metadata for every instance. | entailment |
def __instances(self):
""" Cache instances, allowing generators to be used and reused.
This fills a cache as the generator gets emptied, eventually
reading exclusively from the cache.
"""
for instance in self.__instances_cache:
yield instance
for inst... | Cache instances, allowing generators to be used and reused.
This fills a cache as the generator gets emptied, eventually
reading exclusively from the cache. | entailment |
def _resolve_value(self, name):
""" Returns an appropriate value for the given name.
This simply asks each of the instances for a value.
"""
for instance in self.__instances():
value = instance._resolve_value(name)
if value:
return value
... | Returns an appropriate value for the given name.
This simply asks each of the instances for a value. | entailment |
def _get_formatted_data(cls, path, context=None, site=None, language=None):
""" Return an object to conveniently access the appropriate values. """
return FormattedMetadata(cls(), cls._get_instances(path, context, site, language), path, site, language) | Return an object to conveniently access the appropriate values. | entailment |
def _get_instances(cls, path, context=None, site=None, language=None):
""" A sequence of instances to discover metadata.
Each instance from each backend is looked up when possible/necessary.
This is a generator to eliminate unnecessary queries.
"""
backend_context = {'vi... | A sequence of instances to discover metadata.
Each instance from each backend is looked up when possible/necessary.
This is a generator to eliminate unnecessary queries. | entailment |
def _resolve(value, model_instance=None, context=None):
""" Resolves any template references in the given value.
"""
if isinstance(value, basestring) and "{" in value:
if context is None:
context = Context()
if model_instance is not None:
context[model_instance._met... | Resolves any template references in the given value. | entailment |
def validate(options):
""" Validates the application of this backend to a given metadata
"""
try:
if options.backends.index('modelinstance') > options.backends.index('model'):
raise Exception("Metadata backend 'modelinstance' must come before 'model' backend")
... | Validates the application of this backend to a given metadata | entailment |
def _register_elements(self, elements):
""" Takes elements from the metadata class and creates a base model for all backend models .
"""
self.elements = elements
for key, obj in elements.items():
obj.contribute_to_class(self.metadata, key)
# Create the common Django... | Takes elements from the metadata class and creates a base model for all backend models . | entailment |
def _add_backend(self, backend):
""" Builds a subclass model for the given backend """
md_type = backend.verbose_name
base = backend().get_model(self)
# TODO: Rename this field
new_md_attrs = {'_metadata': self.metadata, '__module__': __name__ }
new_md_meta = {}
... | Builds a subclass model for the given backend | entailment |
def _set_seo_models(self, value):
""" Gets the actual models to be used. """
seo_models = []
for model_name in value:
if "." in model_name:
app_label, model_name = model_name.split(".", 1)
model = models.get_model(app_label, model_name)
... | Gets the actual models to be used. | entailment |
def validate(self):
""" Discover certain illegal configurations """
if not self.editable:
assert self.populate_from is not NotSet, u"If field (%s) is not editable, you must set populate_from" % self.name | Discover certain illegal configurations | entailment |
def populate_all_metadata():
""" Create metadata instances for all models in seo_models if empty.
Once you have created a single metadata instance, this will not run.
This is because it is a potentially slow operation that need only be
done once. If you want to ensure that everything is popu... | Create metadata instances for all models in seo_models if empty.
Once you have created a single metadata instance, this will not run.
This is because it is a potentially slow operation that need only be
done once. If you want to ensure that everything is populated, run the
populate_metad... | entailment |
def populate(self):
""" Populate this list with all views that take no arguments.
"""
from django.conf import settings
from django.core import urlresolvers
self.append(("", ""))
urlconf = settings.ROOT_URLCONF
resolver = urlresolvers.RegexURLResolver(r'^/', urlco... | Populate this list with all views that take no arguments. | entailment |
def block_splitter(data, block_size):
"""
Creates a generator by slicing ``data`` into chunks of ``block_size``.
>>> data = range(10)
>>> list(block_splitter(data, 2))
[[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
If ``data`` cannot be evenly divided by ``block_size``, the last block will
simpl... | Creates a generator by slicing ``data`` into chunks of ``block_size``.
>>> data = range(10)
>>> list(block_splitter(data, 2))
[[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
If ``data`` cannot be evenly divided by ``block_size``, the last block will
simply be the remainder of the data. Example:
>>> ... | entailment |
def round_geom(geom, precision=None):
"""Round coordinates of a geometric object to given precision."""
if geom['type'] == 'Point':
x, y = geom['coordinates']
xp, yp = [x], [y]
if precision is not None:
xp = [round(v, precision) for v in xp]
yp = [round(v, precisi... | Round coordinates of a geometric object to given precision. | entailment |
def flatten_multi_dim(sequence):
"""Flatten a multi-dimensional array-like to a single dimensional sequence
(as a generator).
"""
for x in sequence:
if (isinstance(x, collections.Iterable)
and not isinstance(x, six.string_types)):
for y in flatten_multi_dim(x):
... | Flatten a multi-dimensional array-like to a single dimensional sequence
(as a generator). | entailment |
def cli(input, verbose, quiet, output_format, precision, indent):
"""Convert text read from the first positional argument, stdin, or
a file to GeoJSON and write to stdout."""
verbosity = verbose - quiet
configure_logging(verbosity)
logger = logging.getLogger('geomet')
# Handle the case of file... | Convert text read from the first positional argument, stdin, or
a file to GeoJSON and write to stdout. | entailment |
def _get_geom_type(type_bytes):
"""Get the GeoJSON geometry type label from a WKB type byte string.
:param type_bytes:
4 byte string in big endian byte order containing a WKB type number.
It may also contain a "has SRID" flag in the high byte (the first type,
since this is big endian by... | Get the GeoJSON geometry type label from a WKB type byte string.
:param type_bytes:
4 byte string in big endian byte order containing a WKB type number.
It may also contain a "has SRID" flag in the high byte (the first type,
since this is big endian byte order), indicated as 0x20. If the SR... | entailment |
def dumps(obj, big_endian=True):
"""
Dump a GeoJSON-like `dict` to a WKB string.
.. note::
The dimensions of the generated WKB will be inferred from the first
vertex in the GeoJSON `coordinates`. It will be assumed that all
vertices are uniform. There are 4 types:
- 2D (X, ... | Dump a GeoJSON-like `dict` to a WKB string.
.. note::
The dimensions of the generated WKB will be inferred from the first
vertex in the GeoJSON `coordinates`. It will be assumed that all
vertices are uniform. There are 4 types:
- 2D (X, Y): 2-dimensional geometry
- Z (X, Y,... | entailment |
def loads(string):
"""
Construct a GeoJSON `dict` from WKB (`string`).
The resulting GeoJSON `dict` will include the SRID as an integer in the
`meta` object. This was an arbitrary decision made by `geomet, the
discussion of which took place here:
https://github.com/geomet/geomet/issues/28.
... | Construct a GeoJSON `dict` from WKB (`string`).
The resulting GeoJSON `dict` will include the SRID as an integer in the
`meta` object. This was an arbitrary decision made by `geomet, the
discussion of which took place here:
https://github.com/geomet/geomet/issues/28.
In order to be consistent with... | entailment |
def _header_bytefmt_byteorder(geom_type, num_dims, big_endian, meta=None):
"""
Utility function to get the WKB header (endian byte + type header), byte
format string, and byte order string.
"""
dim = _INT_TO_DIM_LABEL.get(num_dims)
if dim is None:
pass # TODO: raise
type_byte_str =... | Utility function to get the WKB header (endian byte + type header), byte
format string, and byte order string. | entailment |