IT文库

Category

All · Backend Development (556) · Python (556) · Django (93) · PyWebIO (86) · Jupyter (62) · Scrapy (62) · Celery (51) · Tornado (20) · Conda (17) · ORM (16)

Language

All · English (463) · Chinese, Simplified (88) · French (2) · Chinese, Traditional (1) · English (1)

Format

All · PDF (314) · Other (241) · DOC (1)

This search took 0.094 seconds and found about 556 results.
  • PDF document: PyMuPDF 1.24.2 Documentation

    import fitz

    src = fitz.open("test.pdf")
    doc = fitz.open()  # empty output PDF
    for spage in src:  # for each page in input
        r = spage.rect  # input page rectangle
        d = fitz.Rect(spage.cropbox_position,  # CropBox displacement if not
                      spage.cropbox_position)  # starting at (0, 0)
        r1 = r / 2  # top left rect
        r2 = r1 + (r1.width, 0, r1.width, 0)  # top right rect
        r3 = r1 + (0, r1.height, 0, r1.height)  # bottom left rect
        r4 = fitz.Rect(r1.br, r.br)  # bottom right rect
        rect_list = [r1, r2, r3, r4]  # put them in a list
        for rx in rect_list:  # run thru rect list
            rx += d  # add the CropBox displacement
            page = doc.new_page(-1,  # new output page with rx dimensions
                                width=rx.width,
                                height=rx.height)
            page.show_pdf_page(page.rect,  # fill the new page completely
                               src,  # input document
                               spage.number,  # input page number
                               clip=rx)  # which part of the input page to use
    doc.save("poster-" + src.name, garbage=3, deflate=True)

    0 credits | 565 pages | 6.84 MB | 1 year ago

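The snippet above is PyMuPDF's page-splitting recipe: each input page is copied into four quadrant pages via show_pdf_page. A quick sanity check of its output might look like this (a minimal sketch; "test.pdf" and the "poster-" output name follow the recipe's placeholders):

    import fitz  # PyMuPDF

    src = fitz.open("test.pdf")         # the recipe's input (placeholder name)
    out = fitz.open("poster-test.pdf")  # the recipe's output for that input
    # each source page was split into 4 quadrant pages
    assert out.page_count == 4 * src.page_count
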
  • PDF document: Scrapy 0.16 Documentation

    Supported options:
    • --callback or -c: spider method to use as callback for parsing the response
    • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response

    hxs = HtmlXPathSelector(response)
    item = Item()
    item['id'] = hxs.select('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
    item['name'] = hxs.select('//td[@id="item_name"]/text()').extract()
    item['description'] = hxs.select('//td[@id="item_description"]/text()').extract()

    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        log.msg('Hi, this is a row!: %r' % row)
        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item

    0 credits | 203 pages | 931.99 KB | 1 year ago

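The --callback/--rules options quoted in these snippets belong to Scrapy's `scrapy parse` command, which fetches a URL and runs it through a spider method so you can inspect the extracted items. A usage sketch (the spider name and URL are hypothetical):

    # check how a spider parses a given URL with a specific callback
    scrapy parse http://www.example.com/item/1 --spider=myspider -c parse_item

    # or let the CrawlSpider rules discover which callback to use
    scrapy parse http://www.example.com/item/1 --spider=myspider --rules
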
  • EPUB document: Scrapy 0.16 Documentation

    Supported options:
    • --callback or -c: spider method to use as callback for parsing the response
    • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response

    hxs = HtmlXPathSelector(response)
    item = Item()
    item['id'] = hxs.select('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
    item['name'] = hxs.select('//td[@id="item_name"]/text()').extract()
    item['description'] = hxs.select('//td[@id="item_description"]/text()').extract()

    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        log.msg('Hi, this is a row!: %r' % row)
        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item

    0 credits | 272 pages | 522.10 KB | 1 year ago

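The parse_row fragment repeated across these snippets comes from the CSVFeedSpider example in the Scrapy docs. A fuller version for context (TestItem and the feed URL are the docs' placeholders; this assumes the 0.1x-era scrapy.contrib.spiders module path):

    from scrapy import log
    from scrapy.contrib.spiders import CSVFeedSpider
    from myproject.items import TestItem  # placeholder project item

    class MySpider(CSVFeedSpider):
        name = 'example.com'
        allowed_domains = ['example.com']
        start_urls = ['http://www.example.com/feed.csv']
        delimiter = ';'
        headers = ['id', 'name', 'description']

        def parse_row(self, response, row):
            # called once per CSV row, with the row as a dict keyed by headers
            log.msg('Hi, this is a row!: %r' % row)
            item = TestItem()
            item['id'] = row['id']
            item['name'] = row['name']
            item['description'] = row['description']
            return item
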
  • PDF document: Scrapy 0.18 Documentation

    Supported options:
    • --callback or -c: spider method to use as callback for parsing the response
    • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response

    hxs = HtmlXPathSelector(response)
    item = Item()
    item['id'] = hxs.select('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
    item['name'] = hxs.select('//td[@id="item_name"]/text()').extract()
    item['description'] = hxs.select('//td[@id="item_description"]/text()').extract()

    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        log.msg('Hi, this is a row!: %r' % row)
        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item

    0 credits | 201 pages | 929.55 KB | 1 year ago

  • PDF document: Scrapy 0.22 Documentation

    Supported options:
    • --callback or -c: spider method to use as callback for parsing the response
    • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response

    sel = Selector(response)
    item = Item()
    item['id'] = sel.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
    item['name'] = sel.xpath('//td[@id="item_name"]/text()').extract()
    item['description'] = sel.xpath('//td[@id="item_description"]/text()').extract()

    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        log.msg('Hi, this is a row!: %r' % row)
        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item

    0 credits | 199 pages | 926.97 KB | 1 year ago

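In the 0.20/0.22 snippets the older HtmlXPathSelector has been replaced by Selector with an .xpath() method. A minimal self-contained spider built around the same extraction, as a sketch (the spider name, item fields, and URL are hypothetical; assumes the 0.22-era scrapy.spider.Spider class):

    from scrapy.spider import Spider
    from scrapy.selector import Selector
    from scrapy.item import Item, Field

    class ProductItem(Item):  # hypothetical item definition
        id = Field()
        name = Field()

    class ProductSpider(Spider):
        name = "product"  # hypothetical spider name
        start_urls = ["http://www.example.com/item/1"]

        def parse(self, response):
            sel = Selector(response)
            item = ProductItem()
            item['id'] = sel.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
            item['name'] = sel.xpath('//td[@id="item_name"]/text()').extract()
            return item
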
  • PDF document: Scrapy 0.20 Documentation

    Supported options:
    • --callback or -c: spider method to use as callback for parsing the response
    • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response

    sel = Selector(response)
    item = Item()
    item['id'] = sel.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
    item['name'] = sel.xpath('//td[@id="item_name"]/text()').extract()
    item['description'] = sel.xpath('//td[@id="item_description"]/text()').extract()

    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        log.msg('Hi, this is a row!: %r' % row)
        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item

    0 credits | 197 pages | 917.28 KB | 1 year ago

  • PDF document: PyMuPDF 1.12.2 documentation

    ... shifted to writing a new, modern graphics library called Fitz. Fitz was
    originally intended as an R&D project to replace the aging Ghostscript
    graphics library, but has instead become the rendering ...

    annot.setInfo(info)  # update info dict
    r = annot.rect  # take annot rect
    r.x1 = r.x0 + r.width * 1.2  # new location has same top-left
    r.y1 = r.y0 + r.height * 1.2  # but 20% longer sides
    annot.setRect(r)  # update rectangle
    annot.updateImage()  # update appearance
    doc.save("circle-out.pdf", garbage=4)  # save

    ... This is how the circle ...

    0 credits | 387 pages | 2.70 MB | 1 year ago

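The 1.12 snippet above edits an already existing circle annotation ("circle-in.pdf"). With a current PyMuPDF release, producing such an input file first might look like this (a minimal sketch; the file name and rectangle are arbitrary, and the annotation-creation API shown here is the modern one, not 1.12's):

    import fitz

    doc = fitz.open()    # new, empty PDF
    page = doc.new_page()  # add a blank page
    annot = page.add_circle_annot(fitz.Rect(50, 50, 150, 150))  # circle annot
    annot.update()  # build its appearance stream
    doc.save("circle-in.pdf")  # the file the snippet above then edits
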
  • EPUB document: Scrapy 0.20 Documentation

    Supported options:
    • --callback or -c: spider method to use as callback for parsing the response
    • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response

    sel = Selector(response)
    item = Item()
    item['id'] = sel.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
    item['name'] = sel.xpath('//td[@id="item_name"]/text()').extract()
    item['description'] = sel.xpath('//td[@id="item_description"]/text()').extract()

    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        log.msg('Hi, this is a row!: %r' % row)
        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item

    0 credits | 276 pages | 564.53 KB | 1 year ago

  • EPUB document: Scrapy 0.18 Documentation

    Supported options:
    • --callback or -c: spider method to use as callback for parsing the response
    • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response

    hxs = HtmlXPathSelector(response)
    item = Item()
    item['id'] = hxs.select('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
    item['name'] = hxs.select('//td[@id="item_name"]/text()').extract()
    item['description'] = hxs.select('//td[@id="item_description"]/text()').extract()

    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        log.msg('Hi, this is a row!: %r' % row)
        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item

    0 credits | 273 pages | 523.49 KB | 1 year ago

  • EPUB document: Scrapy 0.22 Documentation

    Supported options:
    • --callback or -c: spider method to use as callback for parsing the response
    • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response

    sel = Selector(response)
    item = Item()
    item['id'] = sel.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
    item['name'] = sel.xpath('//td[@id="item_name"]/text()').extract()
    item['description'] = sel.xpath('//td[@id="item_description"]/text()').extract()

    headers = ['id', 'name', 'description']

    def parse_row(self, response, row):
        log.msg('Hi, this is a row!: %r' % row)
        item = TestItem()
        item['id'] = row['id']
        item['name'] = row['name']
        item['description'] = row['description']
        return item

    0 credits | 303 pages | 566.66 KB | 1 year ago
