Image Datasets (ImageNet, PASCAL, TinyImage, ESP and LabelMe): What Do They Offer?

Tags: python, machine learning, computer vision, deep learning, artificial intelligence


Research on image-based search, object detection, object recognition, classification, and related topics depends on a comprehensive and representative dataset. Such a dataset should have images from a large number of categories, and lots of images from each category. Additionally, images in the dataset should closely approximate real-life scenarios: cluttered backgrounds, multiple camera angles, and varied lighting conditions. Furthermore, the dataset needs to provide appropriate labels, and it is desirable that labels are hierarchical in nature. Having bounding boxes around the labeled objects is icing on the cake. Considerable research has been done on this topic, and researchers have proposed several different types of datasets. Here is a bird's-eye view of what these datasets offer.


This article explores some of the key challenges in creating a comprehensive image dataset. It also discusses some of the key works on the topic of image dataset creation, such as ImageNet, PASCAL, TinyImage, ESP and LabelMe.


Key Challenges in Creating Comprehensive Image Datasets

Getting access to images for creating an image dataset is not a problem. The ubiquitous presence of smartphones has made it extremely easy to take pictures, and people around the world are taking a lot of them; rough estimates put it at a few billion new images every day. Additionally, social networking sites have made it super easy to share images, and most prominent search engines provide image search. All this makes it very easy to get a lot of images. The problem, however, is how to label those images accurately, economically, and scalably.


Cost

Let us do some quick calculations to understand the scale of the problem. Say we wanted to create an image dataset consisting of around one billion images. Let's also assume that we found humans willing to annotate images at the very low rate of 1 cent per 100 images annotated. For a single human worker, we are then looking at around 100,000 dollars. However, no respectable dataset will have images labeled by only a single human, so we will need the same images labeled by an average of 5 humans, which brings the cost of dataset creation close to half a million dollars. Although it is not infeasible to get funding of that magnitude for the purposes of image labeling, it may be a difficult sell to funding agencies. All of this assumes that each image contains a single object; if we wanted to label multiple objects per image, or provide bounding boxes around the labeled objects, the cost would no doubt go up. So it is obvious that to make truly large datasets, one has to come up with automated ways of gathering and labeling images, using human effort only for quality control.

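The arithmetic above can be sketched in a few lines; the rate and worker counts are the text's illustrative assumptions, not real market prices:

```python
# Back-of-the-envelope labeling cost, using the assumptions from the
# text: one billion images, 1 cent per 100 images, 5 labelers per image.

def labeling_cost(num_images, cents_per_100_images, labelers_per_image):
    """Total labeling cost in dollars."""
    cents = num_images / 100 * cents_per_100_images * labelers_per_image
    return cents / 100  # convert cents to dollars

print(labeling_cost(1_000_000_000, 1, 1))  # 100000.0 (single worker per image)
print(labeling_cost(1_000_000_000, 1, 5))  # 500000.0 (five workers per image)
```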

Label Hierarchy

Male Italian Sparrow (Passer italiae). Montecatini Terme, Tuscany.

How would you classify the above image? A bird, a bird sitting on a branch, or an Italian Sparrow sitting on a tree branch? All of those are correct labels for the above image (Wikipedia labels it "Male Italian Sparrow (Passer italiae). Montecatini Terme, Tuscany.").


This highlights two important points. First, there is some hierarchy to be had in labels, and second, the human labeler may generalize the labels unless otherwise specified. Let's look at hierarchy in images first.


Natural images are seldom a single subject on a white (or monotone) background. Additionally, there are different types of hierarchies associated with an image. First, there is hierarchical labeling of the object itself: the above image could be labeled as living thing, then bird, then sparrow. Second, there is the hierarchy of objects within the image, such as tree, branch, bird. Having a dataset with hierarchical labeling is important for understanding the context of the image.


Now let's look at the hierarchical aspect of the labeling process itself. Existing work [1] has demonstrated that humans tend to label visual objects at an easier level of abstraction; for instance, more humans are likely to label the above image as a "bird" than as a "Male Italian Sparrow". In order to make a highly detailed dataset, it is important to ask the labeler detailed questions. A naive way of doing so would be to ask the labeler whether object x is present in the image, i.e. a yes/no question. However, an obvious problem with such an approach is scalability. Imagine a scenario where our vocabulary has a paltry 1,000 objects (nouns); to annotate each image in the dataset, the labeler must answer 1,000 questions. This seems too wasteful. The work in [2] proposes an annotation approach that might help solve the scalability problem for multi-level annotation. They observe that labels in a subset (of objects) are often correlated and frequently occur together. For example, labels like computer, keyboard, mouse, etc. often occur together in certain contexts (a computer is more likely to be present in an office environment than in a jungle). Therefore, by dynamically crafting the questions based on the labeler's previous answers, it is possible to reduce the number of questions that need to be asked (compared to the naive approach).

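The correlated-labels idea can be illustrated with a toy sketch. The co-occurrence table, vocabulary, and ordering heuristic below are invented for illustration; the actual method in [2] is more sophisticated:

```python
# Toy sketch of context-driven question ordering for multi-label
# annotation: labels correlated with already-confirmed ones are asked
# about first, so uninformative questions sink to the end of the queue.

CO_OCCURS = {
    "computer": ["keyboard", "mouse", "monitor"],
    "tree": ["branch", "bird", "leaf"],
}

def next_questions(vocabulary, confirmed):
    """Order remaining yes/no questions, correlated labels first."""
    boosted = {c for label in confirmed for c in CO_OCCURS.get(label, [])}
    remaining = [w for w in vocabulary if w not in confirmed]
    # Sort key: correlated labels (False sorts first), then alphabetical.
    return sorted(remaining, key=lambda w: (w not in boosted, w))

vocab = ["bird", "keyboard", "mouse", "jungle", "monitor"]
print(next_questions(vocab, confirmed={"computer"}))
# ['keyboard', 'monitor', 'mouse', 'bird', 'jungle']
```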

Image Quality

This is the most obvious property of images that comes to mind in the context of creating image datasets. However, different works on this topic take widely different stances on the matter. For instance, TinyImage [3] strives to build its dataset with the smallest possible color images (32 x 32), whereas others such as ImageNet [4] go the other way by using higher-resolution images.


There are many more ways of looking at image quality from the perspective of a machine learning dataset, for instance, how representative the image is of the real world. The PASCAL challenge from 2008 [5] focuses on recognizing objects in realistic scenes. Realistic scenes are often cluttered, have multiple subjects in focus, have diverse backgrounds, and may be taken at different camera angles.


My take on the matter is that higher-resolution images are always better than lower-resolution images, as you can always go from high to low resolution but not the other way around. On the topic of image complexity, the ultimate goal should be an algorithm that works on realistic images; for that purpose we would need a dataset with cluttered backgrounds and all sorts of camera angles and lighting conditions. However, if an algorithm isn't performing well on simple, uncluttered, single-subject images, what is the point of testing it on more realistic datasets? So in my opinion, both realistic and simplistic datasets are relevant.

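The asymmetry is easy to demonstrate: downsampling a high-resolution image is trivial, but the discarded detail cannot be recovered. A minimal nearest-neighbour sketch, treating an image as a nested list of pixel values:

```python
# Nearest-neighbour downsampling: keep every `factor`-th pixel in each
# dimension. Going the other way (upsampling) can only interpolate; the
# original detail is gone for good.

def downsample(image, factor):
    """Shrink a 2-D pixel grid by an integer factor."""
    return [row[::factor] for row in image[::factor]]

hi_res = [[x + 10 * y for x in range(8)] for y in range(8)]  # 8x8 grid
lo_res = downsample(hi_res, 4)                               # 2x2 grid
print(lo_res)  # [[0, 4], [40, 44]]
```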

Level of Annotation Detail

Annotations or labels on images play a key part in the learning process. Simple captions for images are very easy to find; most social networking sites encourage their users to provide a caption for each image. While one-line captions are fine for single-subject, well-segmented images, real-life images do not allow for that simplification. Ideally, a good dataset will not only identify all the subjects in an image, but will also tell where exactly in the image those objects reside (by means of polygons drawn around those objects). Works like LabelMe [6] do some good work on that front.


Dataset Creation & Evaluation Methodologies

Data Collection: A number of works (TinyImage [3], ImageNet [4]) rely on search engines to find images. They base their searches on words derived from WordNet [7], a lexical database that captures semantic relationships between words in several languages, including English. For instance, TinyImage [3] states that they "selected 7 independent image search engines: Altavista, Ask, Flickr, Cydral, Google, Picsearch and Webshots (others have outputs correlated with these). We automatically download all the images provided by each engine for all 75,846 non-abstract nouns. Running over 8 months, this method gathered 97,245,098 images in total". ImageNet [4] states something similar: "We collect candidate images from the Internet by querying several image search engines. For each synset, the queries are the set of WordNet synonyms. Search engines typically limit the number of images retrievable (in the order of a few hundred to a thousand). To obtain as many images as possible, we expand the query set by appending the queries with the word from parent synsets, if the same word appears in the gloss of the target synset. For example, when querying "whippet", according to WordNet's gloss a "small slender dog of greyhound type developed in England", we also use 'whippet dog' and 'whippet greyhound'." Other tricks, such as using language translations, further enhance the dataset.

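The ImageNet-style query expansion quoted above can be sketched roughly as follows; the function and data shapes are assumptions for illustration, not ImageNet's actual code:

```python
# Sketch of query expansion for image search: start from a synset's
# synonyms, and append a parent-synset word to each query whenever that
# word also appears in the target synset's gloss. The whippet data is
# taken from the quoted example.

def expand_queries(synonyms, parent_words, gloss):
    """Build a query set from synonyms plus gloss-confirmed parent words."""
    queries = list(synonyms)
    for syn in synonyms:
        for parent in parent_words:
            if parent in gloss:  # parent word shows up in the gloss
                queries.append(f"{syn} {parent}")
    return queries

qs = expand_queries(
    synonyms=["whippet"],
    parent_words=["dog", "greyhound"],
    gloss="small slender dog of greyhound type developed in England",
)
print(qs)  # ['whippet', 'whippet dog', 'whippet greyhound']
```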

Data Cleaning: Some works like ImageNet [4] have used Amazon Mechanical Turk (AMT) workers to clean the images, while others like TinyImage [3] do not attempt cleaning at all, citing their dataset as too large to be cleaned.


ImageNet [4] in particular has an interesting quality-control mechanism. They used multiple AMT workers to label images; the interesting question was how to decide how many workers should be assigned to a particular image. Naturally, some images are more difficult to label than others. It is also intuitive that assigning more workers to a single image gives us more data, but is slow and expensive; on the other hand, fewer workers means labeling is faster but may compromise quality. So there is a case for some optimization, or more precisely, for a probabilistic algorithm to determine how many workers should be assigned to an image so as to get a reasonable level of accuracy for most images.


In ImageNet [4], for each synset, they begin by randomly sampling an initial subset of images. At least 10 workers are then asked to vote on each of these images. This data is used to obtain a confidence score table indicating the probability that an image is a "good" image given the votes. In essence, this mini-experiment serves as a proxy for how complex annotation for that synset will be. For each remaining candidate image in the synset, they use this probability distribution to estimate the number of AMT workers required to reach a predetermined confidence level for the image.

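A much-simplified sketch of that allocation scheme, with an invented confidence table standing in for the one ImageNet learns from the initial voting round:

```python
# Simplified dynamic worker allocation: a confidence table maps the
# number of (unanimous) positive votes to P(image is good). We keep
# adding workers until the target confidence is reached or a cap is hit.
# The probabilities below are made-up illustrative values.

CONFIDENCE = {1: 0.70, 2: 0.85, 3: 0.93, 4: 0.97, 5: 0.99}

def workers_needed(target, cap=5):
    """Smallest worker count whose tabulated confidence meets the target."""
    for n in range(1, cap + 1):
        if CONFIDENCE[n] >= target:
            return n
    return cap  # stop at the cap for very hard synsets

print(workers_needed(0.95))  # 4 under the toy table above
print(workers_needed(0.80))  # 2 under the toy table above
```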

Another work, ESP [8], uses a game to get labels for an image. Two users are shown an image at the same time, and each has to come up with labels for that image independently. The objective of the game is to come up with as many matching words for the image as possible. Although this game tries to address the labeling challenge in a scalable and cost-effective way, there are a few drawbacks, key among which is the abstraction of the labels proposed by the players. Consider the example where both players are shown the sparrow image from above: owing to the time-limited nature of the game, they are very likely to label it a bird. Now, the label "bird" isn't incorrect for the image, but it misses several key pieces of information such as hierarchy and context, as described in the paragraphs above.

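The matching rule at the heart of the game is simple to mock up; the round logic below is a bare-bones illustration, not the real system:

```python
# A minimal mock of an ESP-style round: two players submit labels
# independently, and only words both players typed count as agreed
# labels. A taboo list can exclude words that are off-limits.

def esp_round(player_a, player_b, taboo=()):
    """Return the labels both players proposed, minus any taboo words."""
    return sorted((set(player_a) & set(player_b)) - set(taboo))

print(esp_round(["bird", "branch", "sparrow"], ["bird", "tree"]))
# ['bird']  (the coarse label is the one both players reach fastest)
```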

Annotating Images: Most datasets are content with labeling the various objects in an image. However, even better than an accurately labeled image is one that also annotates the location of the labeled objects within the image. Works like LabelMe [6] shine here: they have created a web-based annotation tool with which labelers can draw polygons around the objects they are labeling. This data is immensely useful, and most other datasets do not provide it. Knowing the objects present in the image, as well as the objects the user deems label-worthy, is always a useful piece of information. However, some key decisions in this process are left to the user, which is in some ways both good and bad. For instance, if there is a human in the image, should the worker label the eyes, ears, hands, and feet separately, or just tag the human as a whole? The decision is left to the user's discretion.


Measuring Completeness: Completeness is difficult to define in the context of image datasets. How many images does a dataset need? How many subjects or categories of subjects? What composition of images must a dataset have before it can be called complete? Let's start with volume. The dataset with the most images at the moment is TinyImage [3], with around 79 million images, followed by ImageNet [4] with around 14 million. The remaining datasets are tiny in comparison, with a few tens of thousands of images each. However, if hierarchical classification of medium-to-high-resolution images is the key concern, ImageNet [4] might be the top contender, with images from over 21 thousand synsets. On the other hand, if the concern is realistic, cluttered images, PASCAL [5] or LabelMe [6] might do the trick.


Conclusion


I was looking for a simple, unbiased summary of the image datasets currently available, with some notes on the pros and cons of each. There were a number of articles, but nothing quite like what I was hoping to read. So here is the short summary that I might have liked to read myself. I hope it serves as a useful pointer for those looking to get into the topic.


Translated from: https://medium.com/@squeakypaw01/image-datasets-imagenet-pascal-tinyimage-esp-and-labelme-what-do-they-offer-b0b1da832e60


Copyright notice: This is an original post by the blogger, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/weixin_26704853/article/details/108515744
