Thursday, March 29, 2018

Auto news on YouTube Mar 29 2018

Hi.

My name is Jennifer Dowding.

I work for Archiact.

And we are showing Evasion here at GDC.

Evasion is an intense multiplayer VR shooter

where teams of players can get in and fight against an enemy

called the Optera.

It's VR.

It's looking incredible today on the HTC Vive Pro.

And we think it's the next generation of VR gaming.

And we're working with Intel on making it even

better for the i7 processor.

So there's more destruction, more physics, more explosions,

and it looks very high quality on the i7.

And we're actually showing for the first time all four

of our classes.

So we have four classes that players can try out

with four full-body IKinema-based avatars

with all different weapons and skills.

You can learn about Evasion at evasionvrgame.com.

We're also on Twitter, Facebook, and Reddit.

Check it out.

For more information >> Evasion Raises the Bar for Explosive Real-Time FX in VR | Intel Software - Duration: 1:17.

-------------------------------------------

Conqueror's Blade: Immersive Experience on Intel® Core™ Architecture | Intel Software - Duration: 2:24.

Hello, everyone, I'm Xi Wang, producer for Conqueror's Blade.

We are here at GDC.

[MUSIC PLAYING]

Conqueror's Blade is a medieval warfare game.

We're trying to bring in everything about war,

from recruiting your troops and training your troops

to crafting all the armor.

And the combat part is the most exciting part of the game.

We are putting the [INAUDIBLE] weapon era

and the gunpowder era together, so it's a mix

of the blade and the cannon.

To make the most immersive experience for gamers,

we need to support thousands of troopers

on the screen at the same time.

So we had to design our own engine, which

is super efficient at rendering massive numbers of characters.

And we have a close collaboration with Intel,

and so we've done a bunch of optimization

for multi-core.

So you can enjoy full 60-frames-per-second action

gameplay, and you can also see massive numbers of troopers

around you, giving you a really intense combat

experience.

The most interesting thing about the Conqueror's Blade gameplay

is the many dimensions of the war,

because it sits in the era between the [INAUDIBLE]

weapon and the gunpowder.

So you can use your sword and your shield to fight.

More than that, you can use the cannon,

the battering ram, and the siege tower.

All of this creates a lot of complexity and dynamics

on the battlefield.

That makes the combat more interesting.

Beyond the combat, we have a big world.

The big world is 5,000 square kilometers in size.

So we can support maybe more than 1 million gamers,

and each of them can then create their own house

and their own kingdom.

So it's purely sandbox, which means we have

no predefined territories.

It all relies on the gamers to create

their own organizations.

That's something we think is really cool, because that

lets the gamers interact with each other

and gives the gameplay more depth.

We just finished the closed beta last week,

so we will probably move quickly to the open beta.

We hope gamers can play the game in the next two

or three months.

We have a lot of cool stuff we want to share with you guys,

but we don't have enough time here.

So if you want to find out more detail,

please go to our website, which is conquerorsblade.com.

It has everything you want.

[MUSIC PLAYING]

For more information >> Conqueror's Blade: Immersive Experience on Intel® Core™ Architecture | Intel Software - Duration: 2:24.

-------------------------------------------

Will Persistent Memory Solve My Performance Problem? | Intel Software - Duration: 2:15.

Persistent memory is adding a completely novel memory tier

to the memory hierarchy that fits between DRAM and SSD.

It promises to provide memory that

is larger capacity, affordable, and persistent

at the same time.

My name is Thomas Wilhelm and in this video,

we will show software developers how

they can analyze the application in order

to identify if they will benefit from persistent memory.

If you're using file I/O today, the question

is whether persistent memory will bring you some advantage.

Persistent memory can be seen as an ultra fast SSD.

The performance will only improve if the application

is currently I/O bound.

The [INAUDIBLE] Disk I/O analysis is a great tool

to identify if the application is I/O bound

and can help to quantify the impact.

On the other hand, if you come from the top

and you're using DRAM today, when

does it make sense to replace it with persistent memory?

Persistent memory will have much larger capacity than DRAM

at a better price.

The [INAUDIBLE] memory consumption analysis

will help you understand where your application

is using memory and which data structures you might want

to move to persistent memory.

When you keep your data in persistent memory,

you don't need to reload it when the program or the system

is restarted.

This allows you to quickly suspend

your program, or restart it quickly during planned or unplanned

downtime.
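
The general idea can be illustrated with an ordinary memory-mapped file (just a sketch in Python, not Intel's persistent-memory tooling; the path below is a hypothetical DAX-mounted file, and real persistent-memory code would normally go through a library such as PMDK):

    import mmap
    import os
    import struct

    PATH = "/mnt/pmem0/counter.bin"  # hypothetical persistent-memory backed file
    SIZE = 8                         # room for one 64-bit counter

    # Create and size the backing file on first use.
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(b"\x00" * SIZE)

    with open(PATH, "r+b") as f:
        mm = mmap.mmap(f.fileno(), SIZE)
        (runs,) = struct.unpack("<Q", mm[:SIZE])  # value left over from the previous run
        runs += 1
        mm[:SIZE] = struct.pack("<Q", runs)
        mm.flush()                                # push the store toward the medium
        mm.close()

    print("This program has been started", runs, "times.")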

But keep in mind, persistent memory

has high latency which might negatively impact performance

when you move data structures from DRAM to persistent memory.

The VTune Memory Access analysis quantifies

how much your application is impacted by latency.

Furthermore, it allows you to identify data structures

and functions that are sensitive to latency, which you might not

want to move from DRAM to persistent memory.

Persistent memory creates completely new opportunities,

being a hybrid of RAM and storage.

As you have seen, performance profiles

let you analyze the behavior of your workload

and quantify the impact that it will have on your application.

Thanks for watching.

Don't forget to like this video, subscribe,

and visit the links provided to start

analyzing your application.

For more information >> Will Persistent Memory Solve My Performance Problem? | Intel Software - Duration: 2:15.

-------------------------------------------

Optimize Games with Intel® Graphics Performance Analyzers | Intel Software - Duration: 1:04.

I'm Seth Schneider, here at GDC 2018,

showing off the new release of Intel Graphics Performance

Analyzers.

[MUSIC PLAYING]

Here at GDC, we're showing some really cool new features

in Intel GPA.

We're working on Fortnite and Space Pirate Trainer.

So in the Fortnite booth, what we're showing

is our new hotspot enabling mode.

You can actually take hotspot enabling, reorder your frame,

and find Intel GPU hotspots very easily.

We're also taking a look at Space Pirate Trainer.

Space Pirate Trainer is now released

for Windows Mixed Reality.

And what we've been able to do is

optimize it to run at 60 frames per second on even

the laptop form factor, like a Surface.

So here at the booth, we're showing

all those optimizations, how we got from,

you know, less performant to really performant-- really

great experiences on Intel.

You can download Intel GPA for free at software.Intel.com/GPA

and come check us out.

[INTEL JINGLE]

For more information >> Optimize Games with Intel® Graphics Performance Analyzers | Intel Software - Duration: 1:04.

-------------------------------------------

Educational Technology Software, Apps, and Websites For University Students. By Expecto Patronum - Duration: 8:11.

This is Eleap Website.

As you can see here, under Upcoming Events, you can see whether there is anything you need to do or submit.

At the bottom, under My Courses, the courses you are enrolled in will appear.

After you click into any course you like, you can see all the announcements, study materials, and other study information uploaded by the lecturer.

Just click a document and it will automatically download.

Then let's go back to the My ELEAP page.

You can view your academic calendar here.

You can see your schedule for the coming semesters here.

Everything is listed in a very systematic manner.

You can type anything using Microsoft Word.

If there is a spelling error, a red line will appear below the word.

With just a right click, word suggestions will appear.

Just highlight your text and you can change the font in the font box.

You can also change the size and colour of your text, or make it bold, italic, or underlined.

By clicking INSERT at the top, you can easily add a table.

By highlighting text, you can also cut it and then paste it into any part of Microsoft Word.

You can search for anything by just typing keywords in the search box.

For example, I'm searching for a music video.

There will be a lot of video results. You can choose any one you like.

When an advertisement appears, you just need to wait until you can skip it.

You can easily skip through a YouTube video by clicking on the progress bar.

In the settings, you can change anything based on the options.

The quality of the video can be changed whenever you like.

When you are scrolling down, you can leave and read comments.

YouTube can be used as a study medium too.

As with the previous video, you can pick any video you like to play.

You can skip ahead and change the quality of the video too.

Comments will also be below the video.

Whether you like or dislike the video, you can just click the thumbs-up or thumbs-down right below it.

You can search for anything by just typing keywords in the search box.

Then you will see a lot of articles relevant to the keywords you typed.

Just click on any article you like.

The information about the article is stated clearly.

The references are also stated clearly.

Click on the PDF.

You are able to read the article.

There are also references stated in the article.

By clicking the download icon at the upper-right corner of the page, you can download it and save it to your desktop.

This shows how to use SPSS to create a histogram.

There are two views: Data View and Variable View.

In Variable View, in the Name column, enter a name based on what your data needs.

Then click back to Data View and key in your data down the column.

Then click back to Variable View, and you can see all the information shown above.

Click Analyze at the top, choose Descriptive Statistics, then Frequencies.

Move your data variable into the Variable(s) box.

Click Charts and choose what type of graph you want. After that, click Continue and OK.

Then you can see a systematic table with all of the important information.

When you scroll down, you can see your graph is done.

You can read the mean, std. deviation, and total frequency of your data.

Then you can save your work to your desktop.
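
If you prefer a scripted route, here is a rough equivalent of those steps in Python using pandas and matplotlib instead of SPSS (the variable name and the numbers below are made-up sample data):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Made-up sample data standing in for the values keyed into Data View.
    data = pd.Series([12, 15, 14, 18, 21, 15, 17, 14, 19, 16], name="score")

    print(data.describe())        # mean, std. deviation, count, and so on
    print(data.value_counts())    # frequency table, like Descriptive Statistics > Frequencies

    data.plot(kind="hist", bins=5, edgecolor="black", title="Histogram of score")
    plt.xlabel("score")
    plt.savefig("histogram.png")  # save the chart, much like saving the SPSS output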

For more information >> Educational Technology Software, Apps, and Websites For University Students. By Expecto Patronum - Duration: 8:11.

-------------------------------------------

Text digitization (OCR) software comparison - Duration: 5:16.

Hi and welcome to Intro to OCR. I'm Quinn Dombrowski from Berkeley Research

Computing working with Stacey Reardon from the library. OCR is an important

first step for all sorts of research projects. Whether you have a

computational approach or whether you're working with traditional research

methods where you just need to find things within your text, OCR is an

important first step. Many many primary sources are not

available digitally and so you'll need to scan them yourself and then OCR them

in order to be able to use them for your research. Even computationally

facilitated research often starts out with analog sources. And before we get started

with some of the more complicated software options, it's important to note

that there's a place here for Adobe. A lot of people start off doing OCR with

Adobe: they scan their text and then they run the Adobe Professional OCR

just as part of the PDF program, and if your text is in English and doesn't have

any complicated formatting--if the text is really clear, you don't have columns,

you don't have a lot of images embedded--that might be enough, especially if

you're just trying to search within your PDFs and not trying to extract the text

for computational use. If you've tried to do OCR with Adobe and it hasn't worked

out well, we have a few other options to share with you. ABBYY FineReader is a

good option when precision matters. With FineReader you can go through your text

and do some degree of training on it so after you run one page you can correct

the errors that it makes so that when it runs subsequent pages it won't make

those same errors or is at least less likely to make some of those same errors.

It provides good OCR quality for complex documents especially if you have

multiple languages and non-English character sets, so it can

accommodate Chinese, Japanese, Korean, any Cyrillic-based language, Greek,

Hebrew, all with varying degrees of accuracy, but generally a whole lot

better than you would get with Adobe PDF. It can also support meaningful layout, so

it will recognize tables, it can recognize irregular

columns, and it will then allow you to export your data still within that

structure, as a table or as columns. As I mentioned, it supports training and

correction, and it runs on AEoD, the Analytic Environments on Demand service

provided by Berkeley Research Computing. AEoD provides you with a virtual

workstation with research relevant software. It's great for dealing with

licensing issues, operating system issues, and local space and memory issues, so if

you don't want to pay for an ABBYY FineReader license yourself (they tend to run

around $100), you can use the AEoD desktop in order to log in, do your OCR,

get your data, and log out without having to purchase the software or

maintain it on your own laptop. Another option is Tesseract, and it's good for

"good enough" OCR at scale; "good enough" may vary depending on your project. Training

is possible for it, but it's somewhat technically challenging, and training

data for specific use cases may already be available, so you might not have to

train it. For instance, there are projects that have done training sets for legal

documents and early modern documents. You really should expect to do

computational post-processing with Tesseract. What you're going to get out

of it isn't the OCR embedded within the PDF like you would with ABBYY

FineReader or with Adobe; you're going to get a text file out of it, and it's

probably going to be a text file that has a lot of predictable

errors in it, so doing some computational clean-up after the fact

is going to be really necessary. It runs on the Savio high-performance computing

cluster offered by Berkeley Research Computing and we're considering doing a

pilot of OCR as a service where you could send us your documents and we can

OCR them for you and send you back the text documents and if you're interested

you can see the URL at the end of this video.
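
As a small illustration of that Tesseract workflow (a sketch that assumes the Tesseract engine plus the pytesseract and Pillow Python packages are installed; the file names are just placeholders):

    import pytesseract
    from PIL import Image

    # OCR a scanned page image into plain text.
    page = Image.open("newspaper_page.png")
    text = pytesseract.image_to_string(page, lang="eng")

    with open("newspaper_page.txt", "w", encoding="utf-8") as out:
        out.write(text)

    # The raw output usually needs computational post-processing, for example
    # rejoining words that were hyphenated across line breaks:
    cleaned = text.replace("-\n", "")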

Tesseract sometimes can work really well, so here's an example of a newspaper

document with fairly clear text. There are a few errors here--you can see

July 28th in the middle of the document didn't quite come out right--but for the

most part everything is here. The text doesn't reflow the way it does in Adobe

or in ABBYY FineReader, where the hyphens at the end would need to be edited out

to merge the words together when you reach the end of the column but

it does get all of the text. Sometimes though it doesn't work at all so

here's another newspaper article that looks really similar except there's an

image on the left, you've got this black bar on the right, and Tesseract just

really didn't know what to do with that. It's also possible, as I mentioned, to

connect with disciplinary OCR communities, so there was a Mellon-funded

project, the early modern OCR project or eMOP, that developed some

training sets for Tesseract for early modern texts, and in this case the

image on the left just had too many speckles, too much dirt;

it just wasn't clear enough to be able to really take advantage of that

OCR training set, but it did a reasonable job catching some of

the things that it wouldn't have caught if we had just used the default training set.

For more information >> Text digitization (OCR) software comparison - Duration: 5:16.

-------------------------------------------

Customize Elements of your Battles with Total War: Warhammer II: The Laboratory | Intel Software - Duration: 1:52.

Hi, I'm David Taylor, here at the Intel booth at GDC 2018,

demoing Total War: Warhammer II--

The Laboratory.

The Laboratory is a free DLC game mode made in collaboration

with Intel.

We worked with the developers to add various sliders--

16 different sliders-- to change the way

that you're able to play the game.

You know, you can scale up or down the gravity.

So when the big guy hits them, the AI

goes flying into the air.

You can up the count of troops that you have on the ground.

Physics, particle effects, explosions,

all these different sliders to mess with it

so that you can really experiment, and just

create as much chaos as you can in that game.

Core optimization is an important part

in game development-- being able to offer features to those

who have high end systems.

You can offload things from the GPU

to get better performance, better frame rates, better

physics, better AI interactions within the game.

And for a game like Total Warhammer,

where you're taking the troops, and telling them where to go,

and the computer is telling where to send their troops,

it really provides a whole new scope.

If you want to check out more on core optimization,

and how you can take advantage of the most powerful CPUs,

check out software.Intel.com/gamedev

and check out Total War: Warhammer II with the free DLC,

The Laboratory, out now.

For more information >> Customize Elements of your Battles with Total War: Warhammer II: The Laboratory | Intel Software - Duration: 1:52.

-------------------------------------------

Vinyl Sign Making Kit - USCutter MH 34" Vinyl Cutter BUNDLE w/Software, Vinyl, Tape, Blades - Duration: 0:45.

Vinyl Sign Making Kit - USCutter MH 34" Vinyl Cutter BUNDLE w/Software, Vinyl, Tape, Blades

34 MH Cutter w/VinylMaster Cut - Design and Cut Software. Three fully adjustable pinch-rollers allow you to use a flexible range of materials.

Superior ball-bearing dual roller media roller system. Roland compatible blade holder gives you access to the most economical blades available. Vinyl rolls, tape rolls, knife, squeegee, Roland type blades, Lifetime phone support.

For more information >> Vinyl Sign Making Kit - USCutter MH 34" Vinyl Cutter BUNDLE w/Software, Vinyl, Tape, Blades - Duration: 0:45.

-------------------------------------------

Sprint Vector Enhanced with Intel® Core™ i7 Processors | Intel Software - Duration: 1:17.

Hi.

My name is David Nassau, here at GDC

to talk with you about Sprint Vector,

playing on our new enthusiast 8th Gen Core i7 products.

[MUSIC PLAYING]

Sprint Vector is an exciting new virtual reality

game where you race around tracks at maximum speeds

by moving your arms back and forth as fast as you can.

It's optimized for our latest 8th Gen enthusiast

Core i7 platform, and is available today

on Steam and the Oculus store.

In Sprint Vector, you select one of eight characters

and race with friends for a competitive VR multiplayer

experience around a number of different tracks with varying

environments.

We work with developers to include various features,

physics, and effects into their software

so that they have the most premium virtual reality experience

on our Intel Core i7 platforms.

Experience Sprint Vector for yourself today.

It's available on both Steam and the Oculus store.

If you'd like to learn more about how Intel works with

developers, specifically for virtual reality titles,

please visit software.intel.com/vr.

[INTEL MUSIC]

For more information >> Sprint Vector Enhanced with Intel® Core™ i7 Processors | Intel Software - Duration: 1:17.

-------------------------------------------

Video download from anywhere without any Software - Duration: 3:00.

For more information >> Video download from anywhere without any Software - Duration: 3:00.

-------------------------------------------

2016 Copper Mountain Technologies USB VNA software overview - Duration: 29:34.

Hello there my name is Ben Maxson and I'm an Applications Engineer with Copper Mountain Technologies based out

of Indianapolis, Indiana and welcome to this video overview of the Copper Mountain Technologies

USB VNAs. What you see in front of you right now is our website, the main landing page

of our website, and what I thought I'd do to get started is to show you how to access the

software of the instruments and get that installed. And while we go through that I'll also mention

some aspects of the USB connection, which is what makes our instruments a bit different

from the traditional instruments that are out there. You're probably very familiar

with those. So to get started with the software let's go to the support page and find "Downloads".

Here you can find downloadable executable installers for all of the different instrument

families. There are four families. The R-series is our 1-port VNAs. The S2 series are the

2-port VNAs which can reverse, meaning you can measure all 4 S-parameters. S4 is predictably is the

4-port VNAs. TR is for our TR series of 2-port VNA is that only measure two parameters;

they only generate from Port 1. So those are are called 2-port 1-path VNAs. So today

let's let's go ahead with the S2 software installer. It's downloading. As it does so I'll

just mentione that the USB interface is really what makes our instruments different from

the traditional instruments that are out there. The USB connection is a plug-and-play

interface which is pretty well-suited for Vector Network Analyzer Applications actually

because the raw data rate of VNAs is relatively small especially compared to the bandwidth

of USB. Even USB 2.0 is hundreds of times faster than would be required for a VNA so unlike

something like a real-time spectrum analyzer the USB interface is not bottleneck in any

way for the VNA. So it looks like our download is completing, we'll just go ahead and run that. You may encounter some security

messages as you as you go through the installer depending your browser and operating system

configuration. I'ts safe to ignore those some. So, here we have our first decision to make with the installation

the analyzer model can be selected from among all of the 2-port VNAs that we offer.

You can you can choose to autodetect the VNA, and that's probably the best selection so that

you doon't have to dig through the menus to find the correct analyzer once it's installed. You can

also choose initial analyzer model. I'll use the default of C1209. it's also possible

to lock the installer to a specific serial number. That can be useful if you multiple

instruments connected to the same PC and want to use a different software for specific instruments.

Now we see the demo mode option. Demo mode is basically a simulator of the VNA so it's

possible to run the Demo mode software without the physical instrument. I'm going to choose to

do that today since I don't have access to the VNA here, but if you have the actual VNA you would probably

want to uncheck this box and just install in Full mode. You always can change this setting later.

Here we choose our installation path; I'l choose the default which already exists on my computer.

That's fine. And the full installation includes some operating manuals, programming manuals and

and guides, and some other useful utilities and tools. We'll go ahead with everything. Desktop icon will be fine.

And now we want to make sure that we leave this box checked on the last page to register

the COM server. The COM server is necessary if you want to automate the measurements from

a program like LabVIEW, Matlab or any other programming environment. We'll come back to

that but the key thing here is to make sure that we do register the COM server when we install.

And here see the success dialogue that that did indeed happen. So now's we're ready to launch the VNA software. I have a shortcut on my

my taskbar to do so. And here we are, we're up and running with the simulator mode. You see here in the title bar here

we see simulator mode which is also known as Demo mode. So this is a simulated result

the simulator's actually pretty sophisticated in the sense that if we change instrument settings. The simulator is actually pretty sophisticated,

in the sense that if we adjust the setings it will adjust the sweep speed and things to approximate the the reality would see we

if we did have a physical VNA running. OK so now, let's go ahead and explore the user interface

here. If you're familiar with VNAs you're probably are thinking this looks pretty familiar.

You have menu buttons along the right side of the window here, as well as a drop-down menu

along the top. These have more or less equivalent functionality so you can use whichever you

prefer. In general the the right side menus are my preference. In some cases, some of instruments

have just a subset of features in the drop-down. So you can always get the full set of features

using the right side menus. And that's what I'll do today. So to start with, we might want to configure

our stimulus--the generator of the instrument. You can see that right now it's set up for the start

and stop frequency of the full frequency range of the C1209 VNA. If we wanted to adjust those

we could do so by just clicking on these and entering in a preferred start value. We'dl use 5-capital-M for megahertz

and maybe we'd want to stop at 8-capital-G for the Gigahertz. Alternatively you can click on these axis

labels directly, and enter new values down here. And also, if you mouse over the start of the

axis, you can click and drag. See that? So I can increase the start frequency to a prefered value

using the graphical representation here. Why don't I zoom in just on the passband of this filter.

We're simulating a bandpass filter basically. And now I'll show you how to change the display format. So one

way is to click on the display format indication above the plot and just choose display format you prefer.

So, choose a Smith chart for example. Alternatively, and we'll go back to the main menu

by clicking at the top of this menu. There's always this little back arrow here to remind you. We could

also change the format using the format menu. So here are the different options we have for

the display format. I'll go back to the Log Magnitude. OK so returning to stimulus just briefly,

we can also specify the number of points, sweep type--all the instruments have a power sweep option

except for the one port VNA's. That's included as a base feature. As well as the segment mode.

You can set the output power of the instrument. So currently it's at its default of 0 dBm. C1209 can go

as high as +15 dBm. And of course if you enter value in any of the control fields which is in excess

of instrument's capability it will just limit to the to the maximum or minimum of the instrument

accordingly. We can also set a measurement delay and some advanced features here related

to the trigger, but the for purposes of a quick demonstration, we'll leave those at their default values.

So a next point to show you will be multiple measurements. So right now we're

looking at just S11 in a Log magnitude format. If we wish, we can add multiple traces, up to 16

traces per channel. And we can spread those up to multiple graphs so we can see them individually.

So here we have S11, S21, S12 and S22, all four plotted at the same time. Why don't we go back to

the full frequency range of the analyzer, so that we can see the overall response. So here you can see it is

a bandpass filter, with reflection parameters here on these diagonal graphs. Let's try to find the

stopband of this filter, because it's off the bottom of the display right now. So we'll

do that on scale menu. We can choose Auto Scale All to automatically adjust the parameters

of all plots in the channel according to their current measurement values. There we are, now we can see the stop

band noise down here a little bit. And if we wanted to see a little further down into that noise, of course we can reduce

our IF bandwidth. It'll slow measurements down but allow us to see a lower noise floor. So we can

do that either on stimulus menu--here you see IF bandwidth-- or on the Average menu, or using this

control at the bottom below the plot directly using the mouse. If we right-click there we see the full

range of IF bandwidths available. Let's choose something more like 100 Hz, so now you can see the noise floor has

dropped off the bottom again. We'll go back to scale and Autoscale All. And now we can see the noise

floor its significantly reduced, about 100-110 dB. So we're looking down there pretty

good into the noise. And of course we could reduce it even further but it would slow the sweep down

even more. So we'll not do so for. I'll also also show you an addition to the IF bandwidth, on the

averaging menu you can set the number of averages and enable averaging. You can also enable smoothing

if you wish. And now you can see the noise is starting to drop even further. We'll see the number of averages reflected

down here instrument as the instrument continues to sweep. So I showed you how to make multiple traces. I'll also mentione

possible to also have multiple channels--up to 16 channels--with up to 16 traces per channel.

An additional channel allows you to have a separate control over the frequency range

and output power. So the stimulus for all the traces in a given channel is the

same. That's sort of what makes a channel different from traces. If you need say a frequency sweep

and a power sweep, you could add a second channel and configure it for power. Or if you wanted to see

a wideband response with one plot and the passband response with the second, you could do so with a second channel

as well. As an example we can put a Smith chart here with just our passband. And explore our

narrowband response on right, and wideband response on the left. For example. Normally people will be using

just one channel and multiple traces in the channel. OK so that's the basics of the stimulus

set up and display format controls. There are many more options here, but in the interest

of time, I'll just highlight the main points. So calibration will be next next point

we can talk about. Calibration of the VNA of course accomplishes two goals. One is to compensate

for the VNAs exact parameters at the time of measurement. The second is to compensate

for losses and delays of any fixtures that are connected between the instrument's ports

and the device under test. So that includes any cables, adapters, in some cases amplifiers and

other components. You want or you want to remove the response of those fixture elements

from the measurement. You want to measure only the device under test. That's the second

main goal of the calibration process. So there many many types of calibration, variants of

calibration algorithms. The common thing of all those is that there are three main steps.

One, is you have to know the correct response--the true response--of one or more calibration standards.

And obviously you need to tell the instrument what those are. Secondly you need to measure

those standards with the instrument. And then third, and the instrument takes care of this

for you, you need to compute the difference between the known response and the measured response,

and calculate what the correction parameters should be in order to achieve the best

possible, most accurate response, given those two pieces of information.
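
As a concrete illustration of that last step (a textbook one-port error model, not necessarily the exact algorithm this instrument uses), the measured reflection coefficient Γm is related to the true reflection coefficient Γa by three error terms:

    Γm = e00 + (e10 · e01 · Γa) / (1 - e11 · Γa)

Measuring three known standards, such as an open, a short, and a load, gives three equations that determine e00, e11, and the product e10·e01, and the instrument can then invert this relation to recover Γa from every later measurement.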

The first step, defining the known response of one or more standards, is accomplished on the Cal Kit

menu. Click Cal Kit, and here we see below the plot a list of some very popular calibration

kits from a wide range of vendors, including some Copper Mountain kits but also waveguide

kits from Flann, some popular kits from Keysight (Agilent kits), Rosenberger, Maury, Spinner. And of course

this is a list of the most popular kits. But if you have a kit

that's not in this list, it's pretty straightforward to either modify one of the existing kits or to create

your own using the empty spaces at the bottom of the table. We've got plenty of room

there to define a lot more kits if we need to. Just quickly, we'll dig into one of the kits here, the

default selected kit. And you can see here are the parameters of the kit. Regardless of what

kit you're using, the vendor has provided you with definitions of the standards similar

to these. These can all be edited by simply clicking in the table. You can import and export

these to files as well. And here's where you'd enter the polynomial coefficients for

your Open and Short, and the provided delays as well. Basically that's how you define a kit. So now,

how do we use that kit to do measurements? Well, you come back to the calibration menu

and click the Calibrate button. And here you can see some options for different calibration approaches.

Those range from simple normalizations of the Open, Short, and Thru to a full 1-port

Cal, full 2-port cal, 2-port 1-path cal--if we happen to just be measuring S11 and S21 (or S22 and S12).

And also TRL calibration. Adapter removal and unknown thru are some advanced

calibration options we have. Most commonly people would be doing a full 2-port calibration

on a 2-port analyzer. And again this is the demo software, so calibration is not going to be a terribly

useful thing. But if we were using the physical instrument, we would just simply connect the standard to our

fixture, enter this menu, click on the corresponding standard type, the instrument would perform

the measurement--we can do that step here--and you can see it performs the sweep, beeps to let us know it's done. We

see a checkmark here, and if we come back to the higher-level menu we see there's a checkmark showing

us we've done the Port 1 Open. We would just need to proceed through Port 1 Short, Port 1

Load, Port 2 Open, Short, Load. Thru, and here we would have option for Thru or Unknown

Thru. All of our 2-port 2-path instruments support unknown thru calibration. Isolation

is an optional step; it's normally not recommended unless your fixture is especially prone to

poor isolation. The reason being that the dynamic range of the analyzer is well in excess of its

isolation and so the isolation calibration will actually reduce the dynamic range if it's used unnecessarily.

If we were to go through all the necessary steps here we would have the Apply button clickable. Click that,

and corrections would be computed and applied to the measurement. I'll cancel since

again we're using the demo software. But we would see corrections enabled here and of course

we can always turn those back off or on to see the effects of our calibration later. OK, so that's calibration

in a nutshell. I'll just show you the markers quickly as well. So if we want to add a marker to a trace, we just select

the Trace. You can see which trace is active by this indicator in the title bar. You can click on Add Marker.

You can see that by default, the marker's coupled across all the traces in the channel. If we want a

different functionality we can go to Marker Properties and disable Marker Coupling. Now you can see each marker

can be repositioned independently of the other traces. You can also select Active Only, which

means that when a new marker is added, it will be added only to the active trace.

Also, if you wish to see the marker data summarized in a table it's possible to come

in here and enable the Marker Table. And that just shows you all the marker data duplicated in this table

below the graph. It can sometimes be a little more legible there. You can also reposition

any of the markers by clicking and dragging to more convenient location on any individual

plot if you wish. So that's Markers. Now let's explore the Analysis menu quickly. You can see a number of different

analysis options, including time domain and gating. Notably, those are free and included as standard

features of all of our instruments. A nice convenience to have. Fixture simulation allows

you to embed or de-embed S-parameters. If you happen to know the S-parameters of

a component in your fixture you want to remove, you can do that here. Conversion allows you to convert

to reflection or transmission parameters instead of the linear plots here. General conversion

is simply a more complete mathematical model of the same. Limit testing allows you to

set Pass/Fail limits for each individual trace. It's quite sophisticated: you can have

multiple different frequency ranges, some minimums, some maximums, with a Pass/Fail indication directly

on the plot if you wish. It's very helpful for production environments, where the operator

may or may not know exactly how to interpret the measurement result. There's also ripple

limit, and some other options there. So explore the analysis menu at your leisure. Finally

I'll mention the Save and Recall features and I'll mention a little bit about automation of the

software. So there's a number of different options for saving and recalling the data and

the plots in the analyzer application. First, there's saving the state. Before we go there, I'll

mention that it's important to note the Save Type. This will dictate what type of file we're going to save when we

click Save State. We can either save just the instrument state, meaning its settings; the state

and its calibration data, which is the default; the state of the instrument and its trace--

the actual trace that's displayed on the plot--or we can choose All if you wish

to save State, Trace, and Calibration. I'll just go with the default of State and Cal. So here we

can come in and save the state to any of these predefined state files, 1 through 10.

Or if we prefer to give it a more intuitive name, we can save it with a file name like "MyState", for example.

So now, if we change our settings and want to get back to the known state file,

we can just go to Recall State, File and choose the same. And the settings will be restored including the

calibration data. That's a nice way to get the instrument back to a known state, and it's especially useful if

you're performing a few different measurement types over and over. You can

get the instrument configured exactly the way you need it for a particular test, save the state file,

and then simply recall the state file to get everything back the way that you need it for that test

if you need to get back there again. That's saving state. You can also save Channel, which is

helpful if you wanted to, say, duplicate all of the settings of this channel into a second

channel. You can save it to Channel A and recall it into a second channel in this menu. And

finally, I'll mention also you can save the trace data, if you wish, to a CSV file. This will save

the active trace and it will save the frequency as well as the magnitude and phase if you've selected

a complex display format into the CSV file. Which you can then, of course, open in Excel or any

other suitable program. And finally you can save the data to a Touchstone file, an S2P file,

very commonly used for S-parameter data sharing between applications. And there's a number of options

there. Interestingly, you can also recall data from a Touchstone file. You can recall it either to

replace the measurement data, or into the memory traces. That's especially useful

if you want to compare the live measurements to a saved measurement result that's in a Touchstone

file.
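
For example, a saved S2P file can be pulled straight into Python with the scikit-rf package (a sketch; the file name here is just a placeholder):

    import skrf
    import matplotlib.pyplot as plt

    ntwk = skrf.Network("filter_measurement.s2p")  # Touchstone file saved from the VNA
    print(ntwk)                                    # frequency range and number of ports

    ntwk.plot_s_db(m=1, n=0)                       # plot S21 in dB
    plt.savefig("s21.png")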

And then finally, on the System menu, System > Print allows you to save the plot to a file, as opposed to the data itself. Print > Windows will just take a screen grab of the

plot area and save that to file on our hard drive. Which of course we could then copy and paste into an email,

if we locate it, we can rename it to something more intuitive and save it off into

a shared folder. It will open with your default image handler once it saves. Print > MS Word will actually,

with a single click, create a Word document and paste the screenshot into it. Which can

be helpful for semi-automated production testing. Maybe there'd be a field there for an

operator to fill out some additional fields manually. Print > Embedded is helpful for

literally printing the file. It uses Windows' embedded print dialog to literally

send the plot to a printer, including print to PDF, which can be nice. And then the final

thing to note on the system menu: here's where we find the preset button, where we can get the instrument

back to all of its default settings if we wish to do so. Sometimes, you can get off in the

weeds, and it's nice to get back to a known good starting point. I'll just put my four traces back up quickly here.

I'll also mention System > Plug-ins. Here you'll find a number of different software extensions to the

main software. And we can create plug-ins like these in response to your particular

test requirements. If there's something you need the instrument to do that's just not

quite there in the software, let us know, by all means, and we'll see about making a plug-in

to support that functionality for you. The plug-ins make use of the automation interface of the

instrument, which is what I want to show you next. The instrument's fully programmable. You can

find its programming manual in the \Doc subfolder of your install path (of course that's on our website

as well). This document goes through in quite a bit of detail all the different commands

that are available. You can see it's quite a long document, 400 pages long. Going through

basically every command of the instrument. There's more or less a command for every button

on the menu. So basically everything you can do through the user interface you can also

do using an external program for automation. You just need to find the appropriate command

and send it from your programming environment. The automation interface makes

use of COM server technology which is a Microsoft Windows standard for sharing data

between Windows applications. It's actually very straightforward. Automation is possible

from a wide range of environments including Visual Basic in an Excel file for example,

C++, MATLAB, Python, LabVIEW, C#, VB.NET, VEE, LabWindows/CVI--basically any programming

environment that supports a COM (also known as ActiveX) interface can support automation.
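
As a rough sketch of what that looks like from Python with the pywin32 package (the ProgID "S2VNA.Application" and the exact property names below are assumptions on my part -- check the programming manual in the \Doc subfolder for the spelling your instrument actually uses):

    import win32com.client

    # Assumed ProgID for the S2 software; registered by the installer's COM server step.
    app = win32com.client.Dispatch("S2VNA.Application")

    # Assumed SCPI-style COM property tree; verify the names against the programming manual.
    app.SCPI.SENSe(1).FREQuency.STARt = 5e6            # start frequency: 5 MHz
    app.SCPI.SENSe(1).FREQuency.STOP = 8e9             # stop frequency: 8 GHz
    app.SCPI.TRIGger.SEQuence.SINGle()                 # run one sweep
    trace = app.SCPI.CALCulate(1).SELected.DATA.FDATa  # formatted data of the active trace
    print(len(trace), "values read back")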

Just to show you an example of a command definition, you can see here whether it's a property or a

method, whether it's readable or writable or both, the arguments of the command, and an example of

the syntax in Visual Basic. The syntax is similar for different programming languages,

with small variations, but this gives you an idea of how to get started. OK, well, that's been

a quick overview of the Copper Mountain Technologies USB VNAs. I hope you found it interesting.

If you have any questions about any aspect of the software or the products, of course feel

free to contact us anytime. You can reach us at support@coppermountaintech.com. You'll

find links to that peppered all around the website; any of those will work. For

quoting, demo units, and other questions, sales@coppermountaintech.com is your

best bet. We look forward to hearing from you. Thanks for watching.

For more information >> 2016 Copper Mountain Technologies USB VNA software overview - Duration: 29:34.

-------------------------------------------

Savio introductory training: Logging in to Use GUI Software (part 7) - Duration: 0:50.

If you would like to use a program on Savio that includes a graphical user

interface, for example MATLAB, you would add this -Y flag to the SSH command,

and this will let you use X server software on Savio to use a graphical

user interface. Now, for your own computer to actually display the windows, you must

install X server software on your own machine as well. For Windows you can use

Exceed or Xming, and for Mac you can use XQuartz.
