Dataset
Overview
The Python Function Benchmark serves as a real-world evaluation dataset for membership inference attacks on code LLMs, specifically targeting models pretrained on datasets like the Pile (e.g., Pythia, GPT-Neo, StableLM).
The dataset contains training (member) and non-training (non-member) data:
Member data includes 1,000 Python functions sampled from the Pile dataset (released in 2021). To ensure a diverse sample, we systematically selected the first 10 functions from every 100 consecutive entries in the Pile, resulting in a total of 1,000 member functions.
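The systematic sampling described above (the first 10 functions from every 100 consecutive entries, for 1,000 functions in total) can be sketched as follows; the in-memory entry list is an illustrative stand-in for the actual Pile iteration, not the authors' pipeline:

```python
def sample_members(entries, block=100, per_block=10, total=1000):
    """Take the first `per_block` items from every `block` consecutive entries."""
    sampled = []
    for start in range(0, len(entries), block):
        sampled.extend(entries[start:start + per_block])
        if len(sampled) >= total:
            break
    return sampled[:total]
```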
Non-member data includes 1,000 Python functions extracted from 100 GitHub repositories created after January 1, 2024 (all four evaluated LLMs had been released prior to this date). To ensure repository quality, we sorted repositories by star count in descending order and extracted 10 Python functions from each repository in order. To verify that these functions were genuinely original and not cloned from pre-existing sources, we implemented a rigorous verification process: we parsed each candidate function's code using Python's ast module to extract its name, variable names, and function calls, then used these elements to build search queries for the GitHub API. The verification employed three heuristics: (1) searching for the exact function name to identify direct duplicates; (2) searching by internal variable names to detect refactored code reuse; and (3) searching for the complete string of function calls to find logic similarities. Two authors conducted peer reviews on the search results to ensure all 1,000 functions were original and created after January 2024.
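A minimal sketch of the AST-based feature extraction described above, using Python's standard ast module (the GitHub API querying is omitted, and the exact feature definitions here are an assumption, not the authors' code; ast.unparse requires Python 3.9+):

```python
import ast

def extract_features(source: str) -> dict:
    """Extract the function name, assigned variable names, and called names."""
    tree = ast.parse(source)
    # Assume the snippet contains exactly one top-level function definition.
    func = next(n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef))
    variables = {n.id for n in ast.walk(func)
                 if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    calls = [ast.unparse(n.func) for n in ast.walk(func)
             if isinstance(n, ast.Call)]
    return {"name": func.name, "variables": sorted(variables), "calls": calls}
```

The three search heuristics would then query GitHub with `feats["name"]`, `feats["variables"]`, and the joined `feats["calls"]` string, respectively.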
The benchmark includes 214 non-member function files (some repositories contributed multiple files) with an average of 25.34 lines of code (LOC). For member functions, file counts are unavailable as this information was not provided in the Pile dataset.
The benchmark supports evaluation under varied member-to-non-member ratios (e.g., 1:1, 1:5, 5:1) and includes statistics on syntax conventions (e.g., 38.4% of tokens are syntax-related across categories like data models and expressions).
If you find this work helpful, please consider citing our paper:
@inproceedings{li2025synprune,
title={Uncovering Pretraining Code in LLMs: A Syntax-Aware Attribution Approach},
author={Yuanheng Li and Zhuoyang Chen and Xiaoyun Liu and Yuhao Wang and Mingwei Liu and Yang Shi and Kaifeng Huang and Shengjie Zhao},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2026},
}
divide.py
divide.py is a script designed to split a JSONL file into two separate files based on the approximate token count of a specified text field. It detects the appropriate text field from the input JSONL and uses the median token count as a threshold to categorize the entries into "short" and "long".
Usage
To use divide.py, run the following command in your terminal:
python divide.py --input <input_jsonl_path> --short_out <output_short_jsonl_path> --long_out <output_long_jsonl_path>
--input: Path to the input JSONL file (required).
--short_out: Path to the output JSONL file for short entries (default: short.jsonl).
--long_out: Path to the output JSONL file for long entries (default: long.jsonl).
ratio.py
ratio.py is a script that creates datasets with specified positive and negative sample ratios from two JSONL files containing positive and negative samples. It randomly samples from the provided datasets to create a new dataset based on the defined configuration.
Usage
To use ratio.py, simply run the script:
python ratio.py
This script will read from positive/positive.jsonl and negative/negative.jsonl, and create datasets based on the configurations defined in the script. The output files will be named dataset_{name}.jsonl for each configuration.
Dataset Configurations
The following configurations are available in the script:
1_1: 2000 total samples with a 1:1 positive-to-negative ratio.
1_5: 1200 total samples with a 1:5 positive-to-negative ratio.
5_1: 1200 total samples with a 5:1 positive-to-negative ratio.
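The sampling behind these configurations amounts to something like the following sketch (the function name and seeded shuffling are illustrative assumptions, not the script's exact code):

```python
import random

def make_ratio_dataset(pos_rows, neg_rows, total, pos_parts, neg_parts, seed=0):
    """Randomly sample positives and negatives at a pos_parts:neg_parts ratio."""
    rng = random.Random(seed)
    n_pos = total * pos_parts // (pos_parts + neg_parts)
    n_neg = total - n_pos
    sampled = rng.sample(pos_rows, n_pos) + rng.sample(neg_rows, n_neg)
    rng.shuffle(sampled)  # interleave positives and negatives
    return sampled
```

For the 1_5 configuration, `make_ratio_dataset(pos, neg, 1200, 1, 5)` would draw 200 positives and 1,000 negatives.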
extract_members.py
extract_members.py is a script that extracts members and non-members from a JSONL file based on the label field. It reads from python_sample.jsonl, where a label of 1 indicates a member and a label of 0 indicates a non-member. The script outputs two separate JSONL files: one for members and one for non-members.
Usage
To use extract_members.py, run the following command in your terminal:
python extract_members.py
This script will read from dataset/python_sample.jsonl and create the following output files:
dataset/member.jsonl: Contains all entries with label equal to 1.
dataset/non-member.jsonl: Contains all entries with label equal to 0.
Output
After running the script, you will see a message indicating the number of extracted members and non-members.