Wednesday, November 28, 2018

Why We Desperately Need Better Cybersecurity

The Internet of Things is an idea of potentially unending consequence and infinite possibility. 
 
Essentially, it is the drive to make every device in our everyday lives communicate with other devices over the internet. It would mean that your entire house can be controlled with your phone and, one day hopefully, your entire life. Everything from your car to your refrigerator will be able to communicate, not only with your phone but other devices and servers all over the world. 
 
New problems are presented by the Internet of Things
The potential of the Internet of Things, as you might assume, is positively staggering: an entirely interconnected world would mean unprecedented access to data that can be used to shape the future. It is also a step toward uniform access to the internet and the ability to communicate with people all over the world. 
 
The goal is to create a world above the physical, an internet without borders. It is the dream and fascination of many tech entrepreneurs and writers as we see the day of complete internet coverage draw near. 
 
In the meantime, however, we must consider every eventuality that such a world would bring. 
 
This is a truly pressing problem, as the threat of hacking grows with the sheer number of access points being created in pursuit of the Internet of Things. In this article, we examine the reasons why we need better cybersecurity for the coming Internet of Things.
 
A massive network and the law of averages
By the sheer momentum of technological evolution, the number of access points being created is going to be a real problem for users around the world and a real joy to hackers everywhere. As major tech companies push for greater access, devices must be made and acquired for potential internet users to get online. 
 
There is no shortage there, however, as the number of smartphones, and the companies making them, seems to increase almost daily. It is no small feat to manufacture a nation's weight in phones, but our major mobile companies are doing it with ease. 
 
The problem lies in the fact that every smartphone is a potential access point for any malicious actor to exploit. The law of averages alone dictates that as the overall number of devices increases, so will the number of hackers. And this doesn't include all of the laptops and tablets already out there being used with malicious intent. 
 
Security is all about approach
It is impossible to require that all smartphones be encrypted, which makes it all the more important to implement cybersecurity on all our devices. If you do not, you may well fall victim to one of the many hacking attempts as the attackers' numbers grow daily. By the numbers alone, we need greater cybersecurity.
 
The interplay of devices, along with their sheer numbers, shows an immediate and growing need for better cybersecurity. The question is, how do we implement such security across that many devices? We cannot mandate it uniformly across the globe. The only practical way to approach uniform cybersecurity is to educate anyone and everyone who has access to a smartphone, tablet, or computer. 
 
Safety is in everyone’s hands now
This may sound tedious, but even a simple pamphlet inserted into every box, or a default program that explains in detail how to set up your own network security, would be a start. 
 
Anything at all would be better than leaving everyone to their own devices to figure out how to deal with the growing hacker threat. Unless someone is told, they most likely won't know how to create strong passwords or use a VPN. They most likely won't know how to check for viruses, let alone get rid of them.
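To make the "strong password" advice concrete, here is a minimal sketch using Python's standard `secrets` module. The function name and the 16-character default are our own illustration, not something the article prescribes:

```python
import secrets
import string

def make_password(length=16):
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure randomness source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = make_password()
# 16 characters drawn from ~94 symbols gives roughly
# 16 * log2(94) ≈ 105 bits of entropy -- far beyond guessing attacks.
```

The key design choice is `secrets` rather than `random`: the latter is predictable and unsuitable for security purposes.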
 
In truth, most users do not know how to encrypt files or networks. The basic security defaults are barely adequate to deal with the whole host of malicious software circulating the web. That is why education is so important. The only true way to protect our global data is to inform every user on the planet and put the power in their hands.
 
Conclusion
The Internet of Things is an idea worth getting excited about, but the risks are not to be ignored. With the mass integration of our devices well underway, we must make sure to understand everything we can about cybersecurity. There is no complete answer, but we know it starts with education. The prospect of millions of compromised devices is enough to make your hair stand on end. 
 
We cannot let this deter us from the future, as the fight can be won with education and perseverance. We must not let hackers and malicious actors steer us from the path of progress. If we stay informed and well prepared, we will ultimately prevail in our race to the future.
 

How Tech Makes Investing More Accessible

New technology is revolutionizing what the average person can do with money. We’re seeing the rise of new types of investments, higher levels of accessibility for existing investments, and more efficiency, which ultimately leads to even further consumer engagement. So how, exactly, is this accessibility improving, and where can it go from here?
 
Brokerage Platforms
 
The emergence of online brokerage platforms completely changed the game for investing in stocks and bonds. While this hasn’t been good news for stock brokers in the financial services industry, it has made an otherwise complicated and confusing method of investing more accessible. Modern platforms allow average people to place trades with a single click, and some are even able to offer low- or no-cost trades, such as Robinhood’s famous “free trade” model.
 
Real Estate
 
New technology has also made real estate investing more accessible. Historically, real estate investors have been limited to investing in their surrounding locations, but thanks to the presence of virtual tours and similarly immersive types of tech, it’s possible to view and inspect properties remotely. And with the plethora of online options available, you can easily find a property management service provider who can help you manage the property remotely.
 
Crowdfunding and Loans
 
Online interactions are also opening the door to new opportunities, based on the amount of visibility you can generate with an international audience. For entrepreneurs or inventors looking to gather the funds necessary to make their business or product a reality, crowdfunding platforms like Kickstarter are available.
 
In the wake of crowdfunding, there’s been a push for more crowd-based fundraising and investing platforms. Peer-to-peer lending, the process of contributing capital to crowdsourced loans, is becoming more visible and more popular, with platforms like Prosper leading the charge. And despite being heavily restricted and regulated in the past, there are more opportunities than ever for equity crowdfunding.
 
Research and Engagement
 
We also need to consider the vast number of tech-based resources available to the average investor. Smart investments aren’t based on “right” or “wrong” decisions; instead, they tend to favor people who balance the strengths and weaknesses of each investment they make and understand the consequences and potential payoffs of their decisions.
 
Resources like SeekingAlpha have made it easier for people to read and understand detailed analyses of stocks, bonds, ETFs, and other investment types, as well as post analyses of their own. And of course, the emergence of social media and other forums has democratized the conversation.
 
Fees and Transaction Speeds
 
New tech is also making transactions of stocks, bonds, currencies, and other investments much faster—and cheaper for the average consumer. Over the past several years, the average transaction fee has plummeted, and it’s only going to drop more in the future as brokerage platforms become more competitive. New systems are capable of handling much higher volume with a fraction of the resources, and consumers get to reap the rewards.
 
Why Accessibility Is So Promising
 
The accessibility dimension is important for several reasons:
  • Wealth and retirement. When people invest wisely and consistently, they can build wealth and work toward a financially stable retirement. This could ease the burden on social systems like Social Security and Medicare, and ensure that more people have access to the resources they need to live a comfortable lifestyle.
  • Profitability. Though fees are decreasing, financial institutions stand to profit more when more people are using their platforms. More money is in circulation, which drives economic growth, and the people investing in these financial institutions stand to gain just as much.
  • Opportunities. There are also more opportunities for the average consumer. For entrepreneurs and inventors, crowdfunding is a possibility. For those in dire financial straits, crowd-based loans can serve as a bailout.
 
The Future
 
So how could better tech make investing even better in the future?
 
For starters, we’ll see development along similar lines as we’ve seen in the past. Transaction speeds will increase, transaction fees will decrease, and new platforms will consistently emerge to offer consumers new choices for investment. We’ll also see a trend toward democratization; fee-free trading platforms and blockchain-based currencies are just two examples of how decentralization and crowd-based technologies can transform the world of investments.
 
We’ll also likely see more integration and more outreach, making investments available to populations who might not have otherwise gotten involved. Investment platforms integrated with common social media platforms, or even mainstream bank accounts, for example, could introduce the idea of investing to a wider audience.
 
In any case, the world of investing is likely to continue evolving for the foreseeable future. In the span of a decade or two, even our current understanding and approach to investing may become unrecognizable.
 
Larry Alton is a professional blogger, writer, and researcher who contributes to a number of reputable online media outlets and news sources. A graduate of Iowa State University, he is now a full-time freelance writer and business consultant.

What is Omni-Channel? 10 Examples of Brands Providing an Excellent Omni-Channel Experience

Technology has permanently changed the way brands interact with customers. Although shoppers have a multitude of options when it comes to buying products, they still prize the in-store experience. Instead of replacing brick-and-mortars, though, technology has created an opportunity for customers to move from store to website to phone, chat, and text message, all at their own convenience. This has driven a need for brands to start investing in business management tools that can help them reach customers wherever they are, whenever they have questions or need help.
 
If you’re still trying to come up with ways to move your own business to the next level, you don’t have to go it alone. You can take inspiration from some of the top brands in omnichannel, each innovating in its respective industry by emphasizing customer experience.
 
Disney
 
Leave it to Disney to lead the pack when it comes to making customers happy. Guests at the Magic Kingdom have their entire experience orchestrated through an app and a wristband. Together, these items help paying customers book restaurants, access their on-site hotel rooms, enter any park without showing a ticket, and pay for items. The result is a worry-free experience for parents and their kids.
 
Best Buy
 
The key to Best Buy’s strategy is embracing change. Instead of battling showrooming, Best Buy embraced it, offering price matching to any customer who found a better deal online. The company knows the key to keeping customers happy is to make it as easy as possible to go from web to store, including choosing items online and picking them up in their closest store the same day.
 
Timberland
 
To remain competitive, Timberland ramped up its omnichannel efforts a couple of years back, installing tablets in its stores to help customers interact with every product. The customer merely picks up a product and taps it on the tablet, at which point information on the product will be provided. The tablet can also make recommendations for other products the customer might like.
 
Starbucks
 
Many food chains have implemented mobile order and pay, but Starbucks was an early adopter. The company shows how, with a killer omni-channel stack, any business can equip customers with the tech they need. One reason Starbucks’ omni-channel offering works better than other restaurants is that its customers tend to be daily visitors, making it worth it to download an app that lets them order and pay without pulling out a debit card or waiting in line at a cash register.
 
Walmart
 
Having recently rolled out online grocery pickup nationwide, Walmart is leading the pack when it comes to omnichannel grocery buying. Whether customers are buying food or garden tools, Walmart has taken the pain out of shopping. Shoppers can pick out items online and either have them delivered to their vehicle through online grocery pickup or, for non-grocery items, go into the store and straight to customer service to pick them up. The company is now working on features like scan-and-go apps that let customers scan items as they shop and pay without ever seeing a cashier.
 
Dick’s Sporting Goods
 
Dick’s Sporting Goods’ focus on omnichannel has paid off, with online sales climbing in recent years. The company’s app makes it easy for customers to get information on the products they’re looking at in the store. The company is currently testing a feature that would have mobile devices switch to provide store-specific information as soon as a customer enters one of Dick’s locations.
 
Apple
 
Apple stands out in part due to iBeacon, a technology that lets businesses and venues detect nearby customers through the devices they carry with them. This data lets businesses send special offers to those devices or adjust elements like lighting or announcements in response to who is physically present.
 
Sephora
 
Sephora has already established itself as a respected brand, but its omnichannel approach is gaining even more respect. Using tablets, makeup artists can show in-store customers various shade options on the cosmetics they’re viewing. If a customer wants a product that isn’t on site, the associate can easily order it and have it shipped. Outside of the store, the app remains useful with makeup tutorials, special deals, and more.
 
Chipotle
 
Another mobile order and pay contender is Chipotle, which is trying to beef up its mobile offering to attract more customers. The goal is to make it easier for customers to order online and go straight to the register to pick their food up when they arrive.
 
Crate & Barrel
 
Crate & Barrel is one of the latest retailers to embrace omnichannel, testing tablets in its stores. When a customer enters a participating location, tablets are available that they can use as they walk around the store. With the tablet, they can scan barcodes and get information on any product they see. If they see something they like, they can add it to a wish list and come back to it later.
 
Businesses that want to remain competitive will need to find new ways to reach out to customers, providing them with all of the tools they need. Whether it’s making your team reachable in a variety of ways or combining the online and in-store experiences seamlessly, omnichannel is more than a trend. It’s here to stay.

Careers in Healthcare Technology: Advice from an Expert

For this month’s career feature, we interviewed Oliver Amft who authored "How Wearable Computing Is Shaping Digital Health" in the January-March 2018 issue of IEEE Pervasive Computing. Amft is the founding director of the Chair of eHealth and mHealth at the Friedrich Alexander University Erlangen-Nuremberg (FAU), where, since 2017, he has been full professor. Amft coordinated European research consortia such as GreenerBuildings and iCareNet, and is a principal investigator for several other European and national projects. He has co-authored more than 150 refereed archival research publications in the fields of context recognition, biomedical sensor technology, wearable computing, digital health, and embedded systems. We asked Amft about careers in healthcare technology.
 
Computing Now: What types of tech advances in the field of healthcare technology will see the most growth in the next several years?
 
Amft: I believe there will be two key areas: (1) novel methods for system design and analysis, building and expanding into an area of computational manufacturing, and (2) data mining algorithms that dynamically personalize or adapt according to acquired context information. My group and I are working on both areas, as they provide synergies for wearable and implantable medical technology. Our vision is to develop methodologies that optimize systems—from materials to software—to fit personal health needs: preventing the worsening of disease, supporting recovery, or maintaining health.
 
Computing Now: What advice would you give college students to give them an advantage over the competition?
 
Amft: The general recommendation that I give every student is to explore and develop their interests, which hopefully results in motivation and enthusiasm for developing novel healthcare/medical technology. There are many areas that require progress and experts in the next decades. We currently see a convergence between material science, mechanical, electrical and computer engineering, as well as computer science. My specific suggestion is to get involved in interdisciplinary teams to learn from each other and learn about your own capacity. It starts with finding a common team language and often leads to innovation along with a lot of fun. Universities often offer project-based courses that are an excellent start—a startup on the side with a group of similarly motivated people can do it too. If unsure where or how to connect, talk to your professors. I often advise and support students outside of the curricula.
 
Computing Now:  If a graduate must begin work as an intern, freelancer, or independent contractor in the field of healthcare technology, what are some tips for building a strong portfolio for presentation in possible future interviews?
 
Amft: You should see it as an opportunity. As an intern you get to see different fields, connect with experts with different backgrounds, and build a network. Mind you: it is not who you know, but those with whom you have collaborated. In health technology, your achievements in interdisciplinary teams will matter. It shows that you have mastered the field and broken barriers between the classic silos. Try to set your goals and work with/learn from the experts around you.
 
Computing Now: What is one critical mistake young graduates should avoid when starting their careers?
 
Amft: I often see students with excellent marks in advanced subjects of their curricula. Some of them then fail to succeed in projects that require them to connect knowledge across domains, or they fail to identify complementary specialists who can help them progress quickly. Projects in health technology are usually interdisciplinary. Practicing in problem-based learning settings can help them overcome those challenges.
 
Computing Now: Do you have any learning experiences you could share that could benefit those just starting out in their careers?
 
Amft: As a young graduate, I got exposed to industrial embedded systems development, requiring all (and more) of my skills back then. Choosing these challenges and mastering them motivated me to probe further and grow competencies beyond a single field of study. Today, I aim to inspire students similarly in seminars and scientific projects at my lab.

The Many Roles and Names of the GPU: Its Versatility Has Led It to Many Different Platforms

The GPU was originally developed to accelerate 3D games and rendering. Accelerating a game's 3D models involved geometry processing, matrix math, and sorting. Rendering involved polishing pixels and hiding some of them. These are two distinct, non-complementary tasks, but both are served admirably by a high-speed parallel processor configured as a SIMD (single instruction, multiple data) architecture. The processors were used in shading applications and became known as shaders. Those GPUs were applied to graphics add-in boards (AIBs) and served their users very well. SIMD is the architectural design and GPU is the branding name, just as we have the x86 CPU (brand), which is a CISC architecture, and the Arm CPU (brand), which is a RISC architecture.
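The SIMD idea described above can be modeled in a few lines. This is only a toy sketch in Python (a real GPU executes its lanes in hardware lockstep; the loop here merely mimics the grouping of data into vector lanes):

```python
# A scalar processor applies one instruction to one value at a time:
def scale_scalar(pixels, factor):
    out = []
    for p in pixels:              # one multiply per iteration
        out.append(p * factor)
    return out

# A SIMD machine applies the SAME instruction to MANY data elements
# at once. Here we model a 4-wide vector unit: each "load" grabs a
# vector of up to 4 pixels, and one conceptual instruction scales
# every lane of that vector.
def scale_simd(pixels, factor, width=4):
    out = []
    for i in range(0, len(pixels), width):
        lanes = pixels[i:i + width]            # load a vector of pixels
        out.extend(p * factor for p in lanes)  # one instruction, many results
    return out
```

Both functions produce identical results; the point is that the SIMD version issues one instruction per group of data, which is why the architecture excels at pixel and vertex workloads.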
 
It didn’t take long for the mass-produced GPU, which enjoyed the same economy of scale as the ubiquitous x86 processor, to be recognized as a highly cost-effective processor with massive compute density. As such, it was applied as a compute accelerator, and other than an awkward programming interface that only a coder could love, it exceeded the expectations of both users and suppliers. GPUs ultimately found their way into the top 10 of the TOP500 supercomputer list, year after year.
 
GPUs were also applied to image-processing workloads in high-end, ultra-high-resolution cameras, robotic cameras, and cameras in smartphones. That then led to the application of GPUs in machine learning and AI, both for training and inference.
 
And it didn’t stop there. GPUs were placed in servers in the datacenter and first used for bursty projects like film rendering as a service from the merchant cloud providers. And that led to the idea of making a remote GPU a virtual GPU, bringing the power of a big (and usually expensive) GPU to an occasional user, or a user that just didn’t have the budget or space for a powerful local GPU.
GPUs then found their way into the x86 CPU, as well as ARM-based SoCs, in the form of shared memory integrated GPUs.
 
As laptops became thin-and-light notebooks, the space, power, and heat dissipation needed for a powerful GPU became problematic. Experiments were made with bringing the GPU's high-speed PCIe interconnect outside the chassis, but the complexities of cabling, connectors, and line drivers proved too expensive and too cumbersome to be effective. 
 
And then USB-C/Thunderbolt was introduced and changed the equation. Now PCIe signals could be transported across a low-cost, high-bandwidth cable and connector, making the external AIB/GPU a practical docking option for thin-and-light notebooks.
 
The GPU was used in so many configurations and applications that it became necessary to use a prefix to designate which type of GPU and application one was referring to, and so we got the following:
 
dGPU—the basic, discrete (stand-alone) processor that always had its own private high-speed (GDDR) memory. dGPUs are applied to AIBs and system boards in notebooks.
 
iGPU—a scaled-down version, with fewer shaders (processors) than a discrete GPU, that shares local RAM (DDR) with the CPU.
 
vGPU—an AIB with a powerful dGPU located remotely in the cloud or a campus server.
 
eGPU—an AIB with a dGPU located in a stand-alone cabinet (typically called a breadbox) and used as an external booster and docking station for a notebook.
 
Schematically, the various GPUs look like the following diagram.

[Diagram: The many types and applications of GPUs]
 
 
GPUs are in PCs in the form of dGPUs and iGPUs, and often both are present in a PC at the same time.
 
GPUs are in smartphones and tablets as part of an SoC.
 
GPUs are in today’s modern game consoles, and are being integrated into automobiles for entertainment systems, customizable dashboards, and the exciting world of autonomous driving.
 
GPUs power supercomputers, servers, cameras, scientific instruments, airplane and ship cockpits, robots, TVs, digital cinema projectors, visualization, simulation, VR and AR systems, and various toys and home security devices. 
 
And it all started because there was a need and demand for faster, more realistic games. But the GPU market is far from a game; it is a mission-critical market with high demands, high stakes, and extraordinary development and advancement, exceeding Moore's law by orders of magnitude.
 
Look around: how many GPUs do you think are in your life? Probably more than you’d imagine.
 

Five Key Hybrid IT Tips and Putting Hybrid IT to Work for You

The hybrid IT environment is here to stay. But many organizations still haven’t been able to grasp the essential benefits of uniting a mix of workloads that live on premises, in the cloud, on the edge, and/or in co-location. 
 
Whether you’re eager to extend your data center into the cloud for increased capacity and disaster recovery, or you simply want certain applications to reside in the cloud while others remain on-prem for compliance and cost reasons, there are key things you can do to fully leverage hybrid IT. Here are five tips to get the most out of this multi-source environment:
 
1. Knowing Why is as Important as Knowing How
 
Just because you can doesn’t mean you should. Before jumping into hybrid IT, do an appraisal of your business goals and decide what you expect your hybrid IT environment to do. Don’t just start selecting a bunch of cloud services and begin using them. “Blueprint” your enterprise’s strategic IT plan for half a decade into the future. Try to forecast the services you may want to use down the line. Now dovetail your goals with the required systems, infrastructure, applications and resources you’ll need.
 
2. Data Centers Are Pricey to Operate and Innovate
 
Moving applications and data into the cloud tends to free up lots of internal IT resources. If your IT department is running in fits and starts and always behind, it may be time to offload some of your weightier functions to the cloud. Not only will this reduce the load on your IT team, but also on the facilities team handling power, space and cooling.
 
3. Think Containerization for Better Movement and Application Support
 
Considered a step up from server virtualization, containers are designed to be efficient, lightweight, and stateless. You can set up new container instances on demand by employing either virtual or physical infrastructure. Hybrid IT lets you develop and deploy containerized instances to run applications, store data, and operate in a “DevOps” mode. Containers work like virtual machines, but instead of running complete operating system instances that eat up resources, they stay very small. A single container is not a complete package, but you can combine containers to build full-fledged instances. This means you can easily move containers from host to host just by shifting container files. Your enterprise benefits from this portability and application support, especially in a hybrid IT environment, where developers can streamline both app development and delivery.
 
4. Create a Single, Seamless Application for End Users
 
Hybrid IT is often thought of as a mix of different IT infrastructures like two public clouds, a public/private cloud or cloud/on-prem combination with managed hosting offerings.
 
Hybrid IT environments like these are made up of various types of workloads, each running in an individualized environment. Such workloads may even migrate from one environment to another for application deployment, cost, or disaster recovery reasons. As far as the end user is concerned, this complex approach should appear as one seamless, collaborative application. Equally important, the application should perform as efficiently as an app running in a single environment. The takeaway: applications and information should be accessible anywhere, anytime, and on any device. 
 
5. Have a Workable Backup and Disaster Recovery Plan
 
This cannot be overstated. A well-planned backup and disaster recovery strategy should be efficiently operational across all your IT environments. The process should be clear, widely disseminated, and easy to follow. Should a disaster strike, being prepared with a backup plan that you can get off the ground rapidly is paramount. Such failsafe strategies should be easily deployed and carefully maintained to match or exceed your existing strategies and policies. Shifting this responsibility to the cloud offers peace of mind only if the recovery process is simple and streamlined. 
 
Getting the most out of a hybrid IT environment takes both planning and oversight. Organizations eager to reap the cost and efficiency benefits of mixing workloads across on premises, the cloud, edge, and/or in colocation need to consider everything from business goals to the right personnel required to achieve them.
 
About Tuangru
Tuangru’s next-generation data center infrastructure management (DCIM) software is designed for today's hybrid IT environments. Whether workloads reside on-prem, in edge data centers or in the cloud, Tuangru’s DCIM provides managers with a holistic view of their entire infrastructure for management and optimization. The company was recently recognized as one of the fastest growing companies in North America by Deloitte Technology Fast 500™. Tuangru is also a contributor member of The Green Grid.

Will Education be the Next Big Market for Electronic Displays?

For many of us, going to school meant filling backpacks and school lockers with big piles of books and binders overflowing with notebooks. Even today, most schools rely on traditional textbooks that are frequently outdated by the time they’re researched, written, printed and circulated. Budget-strained schools are faced with buying expensive, updated books every few years. Besides offering limited, often obsolete information, books usually only capture the thoughts of a single author. While this is not a big issue for subjects like Math or Chemistry, it is a significant problem when the topic is business or technology. Fortunately, help is at hand.
 
Satisfying the Need for Diverse Up-to-Date Content
 
The move is on to finally give students access to current, diverse content via text, video, and audio. Educators worldwide, especially those in developing countries, are eager for their students to have access to the vast library systems of the first world, to the classroom content at MIT or Oxford, and to the news about NASA’s latest discoveries from the previous week. It’s no wonder classroom learning is undergoing a complete transformation, spearheaded by a growing demand for electronic reading devices in schools. Devices like eSchoolbooks help teachers impart knowledge to students. With eSchoolbooks, all information is synchronized. Students can write on their screens and respond to a teacher’s or another student’s ideas in real time. They can also go online to watch the most current videos or news on virtually any topic suggested by their teacher. 
 
China’s Huge Display Market
 
With its 200+ million students, China is taking the lead in digital learning. Tablets in the form of eSchoolbooks are already in use there. Although adoption is currently limited, more than 100 companies are developing products that embrace this new vehicle for learning. The push toward eSchoolbooks is driven by several factors, not the least of which is the growing environmental concern over the nine million trees cut each year in China for paper textbooks. Understandably, China is the single biggest eSchoolbook market in the world – and it’s a market that has government support. The country has the will, the resources, and the ability to make dramatic changes that can lead the world in education.
 
India Ramping Up 
 
India isn’t far behind China in its eagerness to embrace digital-based education. India’s digital learning market was estimated at $2 billion USD in 2016. Three factors fuel this technology-based transformation. The first is robust demand, generated by over two million schools, 35,000 colleges, and 40 million seats in vocational training centers; the need for smart classrooms has never been greater. Schools in tier two and tier three cities are increasingly adopting the latest classroom technologies. There’s also heavy policy support, with the Indian government planning to liberalize the setup of digital classrooms through various government initiatives. The goal here is to grow the digital education market. And finally, there’s the impetus of FDI (Foreign Direct Investment). From 2000 to 2016, more than $1.3 billion USD FDI has been pumped into India’s education sector, with most investment focusing on digital and tech-related initiatives.
 
The US Has Enormous Potential
 
While the US market for eSchoolbooks in education shows vast untapped potential, adoption depends on local authorities making deals with commercial eSchoolbook suppliers. A $64.5M deal with New York City schools included content for one million devices. While this underscores the potential of the total US market, it also reveals the fragmented nature of a market driven by local and state initiatives. 
 
A recent Zion Market Research report estimates that video-based content in the U.S. education industry will see the highest CAGR (Compound Annual Growth Rate) of any segment, at 5.1% from 2018 to 2026. Video-based content will be increasingly adopted because it facilitates faster thinking, improves problem-solving skills, and reduces training cost and time. Enhanced features in interactive displays have driven their adoption in the U.S. education market. Technology-enabled education also dovetails nicely with a generation of tech-savvy school children raised on video games and mobile computing. 
 
Educators Ted Hasselbring and Candyce Williams Glaser note that computer technology has driven the development of sophisticated devices that act as an equalizer, freeing many students from the limitations of their disabilities – everything from speech and hearing impairments to blindness and severe physical disabilities. See “Use of Computer Technology To Help Students with Special Needs.” 
 
Helping Underserved Children Worldwide
 
Digital technology has the potential to dramatically expand access to education for underserved children worldwide. Half of the world’s 50 million refugees are under the age of 18 and are displaced from their homes for an average of 17 years with little or no access to education. Here the issue is not just the unavailability of educational materials; in remote parts of the world there also aren’t enough teachers, let alone qualified educators. Nor is the shortage of teachers in remote towns just a third-world issue: many parts of Europe and North America also lack qualified teachers.
Imagine kids in remote parts of the world being able to access content from the very best educators in the world, via video clips, animation, and audio.
 
In India, for example, the youth population is expanding rapidly (28 million added annually), and more than half the population is under 25. The nation struggles to educate these children, especially when 65 percent live in rural areas. The problem is compounded by a dearth of teachers, teacher absenteeism, and poor teacher quality. Digital aids have recently entered the picture to confront the challenges plaguing the education system. Digital India initiatives like eBasta make digital education via tablets and computers accessible to students in rural areas. Digital learning can help develop critical thinking skills and make students comfortable with technology. (See the full article: “5 problems with teachers in rural areas which are blocking India's educational growth.”) 
 
Two Colorado schools are bridging the technological divide between urban and rural classrooms. The STEM School Highlands Ranch uses video and teleconferencing to reach across about 100 miles of prairie to the 100-student Arickaree School District. This use of “synchronous online education” gives smaller rural schools access to the most recent technology. To communicate with the STEM School, a state-of-the-art video conferencing camera was installed in the Arickaree School, which rests on Colorado’s high prairie east of Denver. High-tech remote learning lets one teacher reach students in different classrooms in virtually every part of the state; synchronous online learning lets teachers anywhere connect with students everywhere. (See the full article: “Could technology help solve Colorado’s rural teacher shortage problem?”) 
 
China’s rural poor face similar challenges, with fewer teachers willing to take jobs in remote and impoverished areas. As many as 60 million “left-behind” children are either poorly educated or insufficiently educated at home. In the past, these rural students used textbooks that had not been updated in a decade, and their teachers were past retirement age. Today many rural students attend virtual classes – classes that develop their online research skills and teach them how to create slides and videos. (See the full article: “Could online classrooms be the answer to teacher shortage in rural China?”)
 
Addressing the Challenges of Emissive Displays
 
Medical professionals have long expressed concern over young children using emissive displays, which may harm their eyesight. But several device manufacturers now offer blue light filters for these displays. In Canada, some insurers even offer free prescription glasses that filter blue light. To further assuage concerns over blue light, there’s Reflective ePaper for eSchoolbooks. It beats LCD and OLED displays on power consumption and outdoor readability. Newer ePaper technology adds video and color, which are ideal for eSchoolbook applications. 
 
In addition, eWriting surfaces continue to evolve, with new products like reMarkable's Paper Tablet, which offers a lag-free reading, writing, and sketching experience on an ePaper display unencumbered by an OS or apps. 
 
The Rise in Myopia: Is Excessive Reading to Blame?
 
The world has been gripped by an unprecedented rise in myopia (short-sightedness). It’s estimated that up to 90% of Chinese teenagers and young adults are myopic. Myopia now affects around half of young adults in the United States and Europe — double the prevalence of half a century ago. Some estimate that one-third of the world's population — 2.5 billion people — could be affected by myopia by the end of this decade.  
 
Some blame the rise of myopia on more people using emissive display screens on mobile phones, laptops, and monitors. The close proximity at which we use these screens strains the eye. But there may be another explanation. After studying more than 4,000 children at Sydney primary and secondary schools for three years, researchers found that children who spent less time outside were at greater risk of developing myopia. What seemed to matter most was the eye's exposure to bright light. So how does bright light prevent myopia? The leading hypothesis is that light stimulates the release of dopamine in the retina, and this neurotransmitter, in turn, blocks the elongation of the eye during development. Retinal dopamine normally ramps up during the day, telling the eye to switch from rod-based, nighttime vision to cone-based, daytime vision. Researchers now suspect that under dim (typically indoor) lighting, the cycle is disrupted, leading to consequences for eye growth. (See Myopia Boom) 
 
Clearly, digital technology is transforming education as much as Johannes Gutenberg’s printing press did nearly 600 years ago. The need for eye-safe digital displays both inside and outside the classroom has never been greater. Perhaps most importantly, digital displays place vast silos of current information in the palms of those who need it most—our children. Without education, the poor get poorer, and the gap between the haves and have-nots will continue to widen until there is an unfortunate ‘reset’, when we might see unpleasant history repeating itself.
 
Editor’s Note: Sri Peruvemba is a Board Member and Chair of Marketing of The Society for Information Display (SID), the only professional organization focused on the display industry. In fact, by exclusively focusing on the advancement of electronic display technology, SID provides a unique platform for industry collaboration, communication, and training in all related technologies while showcasing the industry's best new products.  Display Week 2019 will be held May 12-16, 2019, in San Jose, CA.

Best 35 Developer Job Posting Sites for Employers in 2018

Developers are in high demand. If you’ve decided to hire top developer talent, you’ll face immense competition. You have the best chance of hiring the most talented developers when you use trusted websites that save you both time and money.
 
From JavaScript experts to freelance mobile developers, there is a website out there that caters to employers looking for reliable technical talent. Unfortunately, there are also many popular job posting sites for employers that may end up wasting your time -- either because they lack features or fail to attract the best in the business. 
 
Your company needs the best job posting sites for employers that can connect you with the best tech talent. The 35 developer job posting sites listed below are essential for employers looking to hire developers in 2018:
  1. Dice
    Dice is a technology-focused job board that has connections to the world’s largest tech firms. Their data analytics software allows employers to gain an in-depth analysis of the hiring market in their field. Its connections make finding a great developer a breeze. Post web developer jobs, mobile developer openings, or engineering ads.
  2. TechFetch
    TechFetch boasts an incredible user base of over two million people. They also use an innovative matching program to help recruiters find the right candidates for their positions, which saves time and money. They host coding and IT jobs and offer great tech talent through an expansive network that spans the globe.
  3. CrunchBoard
    CrunchBoard serves as the job posting site of TechCrunch, the technology news website. When a web developer job is posted on the board, for example, it reaches over twelve million readers, which offers a great deal of exposure.
  4. CyberCoders
    CyberCoders is a technology-focused job site that uses their proprietary software, named Cyrus, to match the best talent with the right job. Their large staff of recruiters makes them a formidable presence in the employer job posting site world with an average wait time of five days for their candidates.
  5. Angel.co
    Angel.co, better known as AngelList, offers an incredible array of services to job posters and hunters. It allows users to find jobs that suit their needs or find positions higher up the chain at startups around the world. On AngelList, you’re likely to find self-motivated, ambitious developers.
  6. StackOverflow
    StackOverflow is the world’s largest developer community with over fifty million users worldwide. On top of offering job services, they also provide a deep learning community that can help to improve the skills of coders while they search. It acts as a multi-tool for job posters and seekers. Because of its popularity among developers, it’s simply one of the best job posting sites.
  7. Toptal
    Toptal is a global network of top developers that enables companies to scale their teams, on-demand. To be accepted into the network, all tech talent must pass a rigorous screening process, which means Toptal is the home for the top 3% of tech talent in the world. Toptal’s matching team will hand-select and connect you with the right developers for your project, usually in under 24 hours.
  8. Github
    The Github jobs program is a wonderful place to search for developer-specific employment. The site itself is very clean and easy to use, and navigating the hundreds of IT jobs on it is quick and informative, making your developer job ad easy to find and apply to.
  9. VentureLoop
    VentureLoop is a startup-focused job posting site that offers great insight into both the companies and the users, putting both sides on a level playing field. They also carry a large investor network to help startups get the funding they need to create jobs and prosperity.
  10. OnStartupJobs
    OnStartupJobs is another startup-focused job posting site that concentrates on European markets. They aim to be the voice of startup culture in Europe and offer a variety of services to do just that. Jobs are posted from around the world and can be applied to with tremendous ease. If you want tech professionals for your European startup, OnStartupJobs is the best place to post jobs.
  11. HackerLife
    HackerLife is a job board with a principal focus on software engineers, though they cover technological specialists of other stripes as well. They are an ideal spot to find good engineering talent among some of the biggest names in the business.
  12. iCrunchData
    iCrunchData is a technology-centric job board that uses real data to determine what job you should take and what you should be paid. They also report on major tech news and the future of technological employment as technology progresses. This data analysis can help candidates who are a better fit for your company find you faster.
  13. Engineering.jobs
    Engineering.jobs, as you might imagine, is an engineering-focused job site that covers every engineering industry. From electrical engineering to petroleum engineering, Engineering.jobs is the best place to post jobs to find every engineer needed.
  14. SymbaSync
    SymbaSync is recruitment software that can help you get instant results. When you post a job listing, you will receive a short list of qualified candidates within ten minutes. In addition, you will get access to a partner network of over one hundred other job boards.
  15. LinkUp
    LinkUp is a job search software platform. With LinkUp, you choose a job you would like to sponsor, set a monthly budget, and put in a campaign duration. Using their data-driven methods, you will most likely find the right candidate quickly and efficiently.
  16. AngularJobs
    AngularJobs is a one-stop shop for all of your Angular employment needs. They offer a full suite of comprehensive job hunting tools and provide both remote and traditional job opportunities. Posting and searching here is for Angular developers only, but the site alone makes you want to learn the framework.
  17. Glassdoor
    Glassdoor is a general-use job board with a few unique features that make it an attractive job posting site for employers. Firstly, they host in-depth information about businesses on the site so that users are fully informed. Secondly, they allow users to review companies so that other users get first-hand accounts rather than guessing what working at a particular company is like. You can easily build your business brand and post an ad simultaneously.
  18. LinkedIn
    LinkedIn is a robust platform that is part social media and part job posting site. It hosts profiles of potential employees so that companies can get a definitive look at the professional side of their candidates. Posting on LinkedIn guarantees a large audience, making it one of the best job posting sites.
  19. Craigslist
    When you think of where to post jobs, you may not immediately conclude that Craigslist is a good place to start. While the site lacks common features and oversight, it really can be a great place to start looking for developer talent. Craigslist is a general job posting board and a general-use website with an overwhelming number of users. It prides itself on being lightweight and easy to use, so postings can be as sparse or detailed as necessary. The exposure alone makes Craigslist a phenomenal job board.
  20. Snagajob
    Snagajob is a robust jobs platform that focuses on hourly workers instead of salaried employees. This is a great service for those businesses that prefer lightweight employment strategies over traditional employment. If you need strictly hourly workers, Snagajob is the best place to post jobs.
  21. Flexjobs
    Flexjobs is a powerful job board that focuses on workers who desire flexible schedules and remote work. They offer deep insider analysis of their businesses for users and provide companies with screened candidates so that quality always comes first.
  22. JobsRadar
    JobsRadar is a general employment board that provides its users with a host of services to promote hireability and interview performance. You can post anything from web developer jobs to IT job openings. They have a call center where users can call in and get immediate help with their employment opportunities, which has helped attract top developer talent.
  23. US.jobs
    US.jobs is a job board that reaches people all over the globe who wish to work in the United States, Americans included. They offer market and industry information so that users can know whether they are being properly valued. You can post local job offerings for your developer opening with ease.
  24. Careers.org
    Careers.org is a powerhouse job board for posters and seekers alike. They offer detailed candidate information and business information so that no one hunts blind. They provide a wide array of services like career change resources and market salary comparison tools that can help potential candidates find your job listing.
  25. Indeed
    Indeed is one of the most prolific job boards on the market. Its various search functions and layout make it easy to use, making it the most popular job search tool on the internet. You can search thousands of resumes through their database to find the right developer for your business. Employer job posting is easy and cheap -- making it a fabulous option for companies of every size.
  26. Monster
    Monster is a massive job posting site for employers with millions of users and thousands of jobs. They offer services to improve the results of your job postings like matching candidates with the most appropriate skills for each posting. The social aspect of Monster has attracted millions of professionals.
  27. WeWorkRemotely
    Have you thought about where to post jobs for remote work? WeWorkRemotely is a technology-focused job board that boasts connections with some of the biggest names in tech. They also have over one million annual users that are trying to find remote work. Their dedication to remote work means that a post on the site should be prepared for a telecommuting candidate. 
  28. RemoteOk.io
    RemoteOk.io is a technology-focused job posting site for employers that utilizes its connections with thousands of major brands to bring the best jobs to the forefront for developers. Its focus on remote work means it offers opportunities around the globe.
  29. Remote.co
    Remote.co is an entirely telecommute focused job board that boasts an entire remote framework for posters and seekers. Their site makes finding and creating remote work simple and informative thanks to their continuous comprehensive content output. They are the home for remote work.  
  30. Ladders
    Ladders is an executive-focused general job board expressly concerned with elite positions. They cover a wide array of industries, including technology, and have a force of over twenty thousand recruiters who scour Ladders for the best candidates. It’s the best place to post jobs for executive positions.
  31. ZipRecruiter
    ZipRecruiter makes it easy for job posters and job seekers to find the right job fast. Posters can create quick and effective job requirements and their proprietary matching technology helps to find the right candidates as fast as possible.
  32. CareerBuilder
    CareerBuilder, as the name implies, is focused on helping users on their career paths. They provide resume analysis and job matching to make life easier for users and posters alike. They also provide recommendations for posters and seekers to help with the burden of searching.
  33. AuthenticJobs
    AuthenticJobs is a full-service job board for techies and creatives. They focus on designers and developers, so you can find artists while you search for technical workers. They also provide services to help both types of employee find the best position for them. Posting on AuthenticJobs should give you some great developer leads.
  34. HeadHunter
    HeadHunter is one of the best places to post jobs for upper-level management positions. HeadHunter is a management and executive-focused job board that hosts jobs across most industries. They are a great resource to those companies that are seeking out managers and executives in technology, sales, and marketing.
  35. SimplyHired
    SimplyHired has jobs of all categories and makes posting them a simple and effective process. They offer a phenomenal salary estimator that allows their users to compare the projected salary against numerous similar positions.
Conclusion
There is a great developer waiting to be hired by your company. Finding the right fit for your company can take up your time and resources, but it’s more than worth the investment. Trusted sites like Toptal can help match you with the perfect developer for your tech projects in no time at all. 
 
Sites like Github and StackOverflow can connect you to massive developer communities. Their job boards are rife with trusted developer talent with impressive technical portfolios. Search through applications to help fill web developer jobs, software engineering positions, and IT jobs.
 
Other niche technology-focused job boards like Dice, TechFetch, and CyberCoders can give you a direct line to the top technical experts. While they won’t be able to provide a full suite of hiring services like a professional recruiting company can, these developer employer job posting sites can still save you time. 
 
Job posting sites for employers that are used exclusively for remote work can also be a treasure trove as they have a higher than average proportion of developer talent searching for opportunities.
 
If you want to reach as many people as possible with your job ad, you may want to consider utilizing LinkedIn, ZipRecruiter, and SimplyHired. These general job search sites can help your posting get more views. If you’ve got the time to burn, these sites can be a great resource. Otherwise, you’ll want to stick to technology-focused job boards.

Scattered Information? Try These Techniques To Rope In Wandering Data

Manually gathering data from multiple sources takes time, yet it’s a necessary task when data analysis drives your decision-making. With data coming from multiple sources and programs that don’t automatically talk to each other, processing data can be an arduous task.
 
The promise of big data is insight into customer experiences to improve products and services. To gain those insights, you need an air-tight organizational system for capturing and managing your data. The promise of big data can only be realized when disparate data sources are integrated.
 
The first step is to de-clutter; stop collecting data you don’t plan to make available to decision-makers. The second step is to make sure the data you collect is accessible. The final step is to implement company-wide policies to maintain the integrity of your data.
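The integration step can be sketched with pandas — a minimal example under stated assumptions: two hypothetical sources (a CRM export and a web-analytics export) that don't talk to each other but share a `customer_id` key. An outer merge with an indicator column immediately exposes which records exist in only one system:

```python
import pandas as pd

# Two hypothetical sources that don't "talk to each other":
# a CRM export and a web-analytics export, sharing customer_id.
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "plan": ["basic", "pro", "pro"],
})
web = pd.DataFrame({
    "customer_id": [2, 3, 4],
    "visits": [14, 3, 7],
})

# An outer merge keeps every customer from both sources and flags
# which side each row came from, exposing gaps in the data.
merged = crm.merge(web, on="customer_id", how="outer", indicator=True)
print(merged["_merge"].value_counts().to_dict())
```

Rows marked `left_only` or `right_only` are exactly the "wandering data" a written integration policy should flag for follow-up.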
 
Create an organized foundation: be selective with the data you collect
 
The amount of data you could collect is infinite, but that doesn’t mean you should collect it all. Unused data accounts for about 99% of all collected data. That unused data is taking up precious space on servers and hard drives belonging to businesses across the world.
 
A 2012 Digital Universe study reported that in 2012, 2.8 trillion gigabytes of data had been collected, but only 3% was tagged and ready for use and just 0.5% was being analyzed. The study noted the small percentage of analyzed data has been shrinking as more data is collected.
 
Collecting data is the easy part. The problem is that most data isn’t being made available to decision-makers, and that’s why so little data gets used.
 
Although the Digital Universe study was performed in 2012, those numbers have remained accurate in further studies. For example, in 2015, McKinsey & Company wanted to know if data from oil rig sensors was being used for decision-making in the energy industry. They discovered less than 1% of data obtained from approximately 30,000 data points was available to energy industry decision-makers. Even if they wanted to make decisions based on all data collected, they couldn’t.
 
If you’re not going to make collected data available to the decision-makers, it won’t be of any use. Before diving into data collection, you need a plan outlining what you’re going to collect, and how you’re going to make it available to the decision-makers who need it. This is founded on an internal strategy that manages how and when team members access, collect, tag, and distribute data.
 
Create a written strategy for data management
 
To prevent different interpretations of the same data, you need a written policy dictating how data should be accessed and interpreted. You want everyone to use the same programs to crunch numbers or department heads will end up using different data points to make important decisions.
 
It’s important to have team members use tools that present data visually in some kind of dashboard. This helps them access and interpret multiple sources of data more easily. If they rely on that data to make any decisions, a visual display of data is imperative. For example, a business intelligence dashboard that displays graphs and charts makes it easy for a marketing team to identify conversion trends at a glance on a consolidated, single screen.
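The conversion-trend computation behind such a dashboard is simple enough to sketch in plain Python; the channel names and numbers below are made up purely for illustration:

```python
# Hypothetical raw events: (channel, visits, signups) per day,
# as they might arrive from several un-integrated sources.
rows = [
    ("email", 200, 10),
    ("search", 500, 15),
    ("email", 300, 21),
    ("search", 400, 12),
]

# Aggregate per channel, then derive the conversion rate that a
# dashboard chart would display at a glance.
totals = {}
for channel, visits, signups in rows:
    v, s = totals.get(channel, (0, 0))
    totals[channel] = (v + visits, s + signups)

rates = {ch: round(s / v, 3) for ch, (v, s) in totals.items()}
print(rates)
```

Agreeing on one computation like this — rather than letting each department crunch its own numbers — is exactly what a written data-management policy enforces.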
 
Any dashboard you create should only report information relevant to the audience using it. A dashboard is strongest when you define your target audience first. In other words, you don’t want to clutter up a sales team dashboard with financials relevant only to the accounting team.
 
In this Business Intelligence Best Practices guide, datapine confirms that dashboards achieve their highest potential when designed to eliminate clutter. Design is a form of communication, and a visually appealing dashboard communicates data effectively to its user. A lack of clutter makes that communication quick. Presenting too much data or a cluttered, complex interface will fail to serve the intended purpose.
 
Datapine’s guide also outlines the Gestalt Principles of Visual Perception, which describe basic human interaction within the context of visual stimulation. The six principles are proximity, similarity, closure, enclosure, continuity, and connection, and anyone designing a dashboard should understand them.
 
Zero in on data accuracy and consistency
 
Thoroughly exploring the concept of data accuracy, Moz.com published this meaty analysis sharing their investigation into the accuracy of click-through rates for branded vs. non-branded keywords. The article shows that the deeper you dive into verifying data accuracy, the more you learn about the programs you use and their limitations.
 
Although no data analysis will ever be perfect, strive to maintain consistent strategies between all who collect, analyze, and distribute data. You want all your gears moving in the same direction so if the way you manage data evolves, everyone will adapt together.

GPU History: Hitachi ACRTC HD63484, the Second Graphics Processor

With the advent of large-scale integrated circuits coming into their own in the late 1970s and early 1980s, fueling the PC revolution and several other developments, came a succession of remarkably powerful graphics controllers. NEC introduced the first fully integrated LSI graphics chip in 1982 with the NEC µPD7220, and it was wildly successful, finding its way into graphics terminals and workstations, though not into PCs built by IBM. It did, however, get used quite extensively by aftermarket suppliers.
 
Hitachi did NEC one better and introduced their HD63484 ACRTC (Advanced CRT Controller) chip in 1984. It could support a resolution up to 4096 × 4096 in a 1-bit mode within a 2 Mbyte display (frame) memory. The ACRTC also proved to be very popular and found a home in dozens of products, from terminals to PC graphics boards. However, these chips, pioneers of commodity graphics controllers, were just 2D drawing engines with some built-in font generation. That same year IBM introduced their EGA, which, with its many clones, became the standard for mainstream PCs. But companies that wanted high-resolution, bit-mapped graphics chose the Hitachi HD63484.
 
 
                    ISA-16 based ELSA workstation add-in board using Hitachi HD63484
_____________________________________________________________________________
 
The LSI HD63484 was built with 2 µm CMOS technology and had around 60,000 transistors (a Motorola 68020 of the time had about 190k). The ACRTC could run at 8 MHz.
 
The ACRTC introduced a 4096 × 4096-pixel screen resolution — eight times the pixel count of HD (1920 × 1080); however, it was only 1-bit deep, though it offered a unique (at the time) interleaved access mode for "flashless" displays. If you wanted 16-bit color (which it supported), then you would have to drop down to 1024 × 1024 resolution, which was astounding at the time and which only a few monitors could support. The super-high-resolution monochrome mode was targeted at the emerging bit-mapped desktop publishing market. The chip offered full programmability of the CRT’s timing signals for whatever monitor you hung on it.
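Those memory figures check out — a quick back-of-the-envelope sketch:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Size in bytes of a packed framebuffer."""
    return width * height * bits_per_pixel // 8

# 4096 x 4096 at 1 bit per pixel fills exactly 2 Mbyte of display memory...
mono = framebuffer_bytes(4096, 4096, 1)
# ...as does 1024 x 1024 at 16-bit color.
color = framebuffer_bytes(1024, 1024, 16)

# And 4096 x 4096 really is roughly eight times the pixel count of 1920 x 1080.
ratio = (4096 * 4096) / (1920 * 1080)

print(mono, color, round(ratio, 1))
```

So the two headline modes are really the same 2 Mbyte framebuffer traded between depth and area.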
 
The ACRTC could support up to 2 Mbyte of video RAM and offered an asynchronous DMA bus interface that could be mapped to the PC ISA-16, VME, or P1014 16-bit buses and, according to the company, was optimized for the 68000 MPU family and the 68450 DMAC. With the DMA capability it was possible to provide Master or Slave synchronization to multiple ACRTCs or other devices.
 
                       VME Force Computer add-in board based on Hitachi HD63484 chip
_____________________________________________________________________________
 
The chip offered high-level commands, which reduced software development costs. The ACRTC converted logical x-y coordinates to physical frame buffer addresses. It supported 38 commands, including LINE, RECTANGLE, POLYLINE, POLYGON, CIRCLE, ELLIPSE, ARC, ELLIPSE ARC, FILLED RECTANGLE, PAINT, PATTERN and COPY. An on-chip 32-byte pattern RAM could be used for powerful graphic environments. Conditional drawing functions were available for drawing patterns, color mixing, and software windowing, and it supported clipping and hit detection.
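The logical-to-physical conversion the ACRTC performed in hardware can be modeled in a few lines of software — a hypothetical sketch, assuming a packed 1-bit-per-pixel framebuffer with MSB-first pixel ordering (the chip's actual memory organization is more elaborate):

```python
def pixel_address(x, y, pitch_bytes):
    """Map a logical (x, y) coordinate to a (byte offset, bit index)
    in a packed 1-bit-per-pixel framebuffer, MSB-first within a byte."""
    bit = y * pitch_bytes * 8 + x
    return bit // 8, 7 - (bit % 8)

def set_pixel(fb, x, y, pitch_bytes):
    """Software equivalent of a one-pixel draw command."""
    offset, bit = pixel_address(x, y, pitch_bytes)
    fb[offset] |= 1 << bit

# A tiny 16 x 4 monochrome framebuffer (pitch = 2 bytes per scanline).
pitch = 2
fb = bytearray(pitch * 4)
set_pixel(fb, 0, 0, pitch)   # top-left pixel -> MSB of byte 0
set_pixel(fb, 15, 3, pitch)  # bottom-right pixel -> LSB of the last byte
print(fb[0], fb[-1])
```

Doing this translation on-chip is what spared application programmers from writing address arithmetic like the above for every drawing primitive.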
 
You could control four hardware windows with the ACRTC, with zooming and smooth scrolling in both vertical and horizontal directions. It also offered the capability of displaying up to 256 colors and a maximum drawing speed of 2 million pixels per second in monochrome and color applications, which proved useful in the high-performance CAD terminals and workstations of the time.
 
For those workstation users, there were eight user-definable video attributes that could be set, and the chip also had light-pen detection.
 
The chip was very popular and got designed into several long-lifetime products. In order to provide a continued supply, clones of the chip were developed using innovASIC’s MILES (Managed IC Lifetime Extension System) cloning technology.
 
                                     Block diagram of Hitachi HD63484 graphics controller
_____________________________________________________________________________
 
This was in the early days of the PC. IBM had cleverly designed an expansion bus architecture that was only 8 bits wide in the original 1981 version of the PC, but by 1984, with the introduction of the PC AT, the bus was extended to 16 bits. With that came a flurry of graphics add-in boards (AIBs), and the first generation of them used the NEC µPD7220. By 1986 there were 88 AIBs, and the Hitachi was displacing the µPD7220, appearing in 22% of them. The ACRTC was a breakthrough chip, and by 1988 there were 194 AIBs on offer, 24% of which had adopted the HD63484. By then, however, the ACRTC was being eclipsed by a new, more powerful, and programmable graphics processor, the Texas Instruments TMS34010, which we will discuss in the next installment of Graphics Chips Hall of Fame.
 
Hitachi tried to extend the ACRTC design to a 3D chip, but the bandwidth requirements and other issues proved too complicated, and the valiant effort failed. For reasons known only to Hitachi’s management, the company abandoned the graphics market just as it was about to take off.
 
Benchmarking
 
The original IBM PC came with an ISA-based AIB called the monochrome display adapter or MDA, and it established a set of instructions on how to drive a display. Therefore, to replace the MDA one had to build an MDA compatible board (the terms “card” and “board” were, and still are, used interchangeably). The MDA could only generate monochrome 7 x 9 dot characters.
 
Right after the PC came out, the first independent graphics AIB supplier, Hercules, appeared. Hercules offered the first bit-mapped AIB, with a higher resolution of 720 × 350. During this period, entry-level graphics boards were also being introduced. In 1984, IBM, the standards setter, introduced the EGA (Enhanced Graphics Adapter), which provided low-resolution (640 × 350) 16-color bit-mapped graphics. The EGA chip was cloned by a half dozen suppliers, and in a later installment we will discuss the clones and their evolution.
 
AutoCAD, a new low-cost, PC-based computer-aided design (CAD) program, was introduced right after the PC. In 1983, Don Strimbu created a detailed single view of a firehose nozzle, which became known as “The Nozzle.”
 
Don Strimbu’s Nozzle was a 2D drawing benchmark for many years (Source: CAD Nauseam)
 
The Nozzle was used as a benchmark to see how fast a graphics AIB could render it. It wasn’t a totally fair test, as the PC’s processor and memory were also in the loop and could dramatically influence the result. But it was all we had at the time, and it was an appreciated and well-used benchmark for several years. The iconic image has since been reimagined in 3D, and we’ll show that too in future installments.
 
Editor’s note: this article is part of a series originally written for IEEE’s Computing Now publication. The series is ongoing and will be continued in JPR’s Tech Watch with stories about new GPU advances as well. To see other stories in the series, search the category GPU History.

How to Make Your PC Run Faster

If you’ve had your computer for more than a year or two, you’ve likely noticed its basic functions slowing down. There are many reasons for this, including the excessive (and increasing) number of files bogging the system down, and bugs in your operating system. Some of these factors can be mitigated or prevented, while others are just a natural part of a computer’s lifecycle.
 
Fortunately, there are a few important changes you can make to encourage your PC to run faster.
 
When to Replace Your PC
 
Note that while the following strategies can be used to make your PC run faster, they can only do so much. If your computer is several years old and has been subject to heavy downloading and installation, even the best strategies may only marginally improve your performance. At that point, it may be time to start shopping for deals on computers, so you can replace your unit entirely.
 
Strategies for Faster Computing
 
Try these tactics to make your PC run faster:
  • Update your computer. Updating your computer will usually help it run faster. In some cases, you may add new features, programs, or installations that have the reverse effect, but in others, you’ll update your operating system to have fewer bugs and run more efficiently. Ultimately, that results in a faster-running PC. 
  • Shut down and/or restart your computer regularly. Many consumers make the mistake of leaving their computer “on” and in a hibernating mode whenever they’re not using it, instead of shutting it down all the way. This can be highly convenient, since you won’t have to go through the entire startup process when you open your computer. However, shutting your computer down completely allows it to clear temporary files and start fresh—so you should count on doing it at least once a week. 
  • Upgrade your RAM. Much of your computer’s performance depends on its RAM, or random access memory. This allows your computer to perform multiple operations simultaneously, holding information in a kind of temporary memory. The more RAM you have, the more processes you’ll be able to perform simultaneously. Upgrading from 2 GB to 4 GB or 8 GB could substantially improve the performance of almost any computer, even one that’s several years old. 
  • Uninstall unnecessary programs. Installed programs on your computer can also bog your system down. Browse through all your current programs and uninstall anything that you haven’t used in the past six months or so. Chances are, there will be at least a few programs you don’t even remember installing. 
  • Delete temporary files. Temporary files are technical files used by your system to execute functions, and as the name implies, they’re only necessary for a temporary period of time. After that, they take up unnecessary space and slow your computer down. There are different ways to delete temporary files in Windows, depending on which system you’re using, but all of them have the power to make your device run faster—especially if you haven’t taken the step of deleting temporary files in the past. 
  • Delete big files you don’t need. Your computer’s speed also relies on the amount of free space on the machine. Go through the files on your local hard drive, and find a way to get rid of whatever you aren’t actively using. Images and videos tend to be major space hogs, so consider deleting them, storing them on an external hard drive, or uploading them to a cloud storage service. 
  • Close out your tabs. Many modern consumers have the bad habit of constantly opening new tabs in their browser, while never closing any of their old ones. If you open up Chrome, you’ll see a dozen or more active tabs, none of which are currently necessary. This may seem innocent enough, or even convenient in some cases, but all those open tabs are running processes that slow your other computer functions down. Make sure you close out all your tabs whenever you’re done with an online session. 
  • Disable auto-launching programs. Some programs will start automatically when your computer starts up. Again, this feature was designed with convenience in mind, so the user doesn’t have to start the program manually. But if you have too many programs starting when you open your computer, it will occupy all your resources, and you won’t be able to get anything done. Think carefully about which programs you want to have at startup, and disable everything that isn’t necessary or beneficial.
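The temporary-file cleanup described above can also be scripted. Here is a minimal sketch using only Python’s standard library; the 7-day cutoff and the function name are arbitrary choices, and a real cleanup should also skip files still in use by running programs:

```python
# Minimal sketch: delete files in a temp directory older than a cutoff.
# The 7-day threshold is an arbitrary choice for illustration.
import os
import time
import tempfile

def delete_old_temp_files(directory: str = None, max_age_days: float = 7.0) -> int:
    """Delete files in `directory` (default: the system temp dir) older
    than max_age_days. Returns the number of files deleted."""
    directory = directory or tempfile.gettempdir()
    cutoff = time.time() - max_age_days * 86400
    deleted = 0
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        try:
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)
                deleted += 1
        except OSError:
            pass  # locked or already-removed files are skipped
    return deleted
```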
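Hunting down big files, as suggested above, can be scripted too. The sketch below walks a folder tree and reports the largest files; the folder path and the `top_n` count are placeholder choices:

```python
# Small sketch: find the largest files under a folder, so you can decide
# what to delete or move to external or cloud storage.
import os

def largest_files(root: str, top_n: int = 10):
    """Return (size_in_bytes, path) pairs for the top_n largest files under root."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # unreadable files are skipped
    return sorted(sizes, reverse=True)[:top_n]
```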
Hopefully, these strategies can collectively boost your PC’s performance, and extend its lifespan by at least several months. As long as you keep your PC clear of unnecessary files and junk, you can extend the effectiveness of these improvements for months to years.  
 
Larry Alton is a professional blogger, writer, and researcher who contributes to a number of reputable online media outlets and news sources. A graduate of Iowa State University, he is now a full-time freelance writer and business consultant.

Famous Graphics Chips: EGA to VGA

The initiation of bit-mapped graphics and the chip clone wars
 
When IBM introduced the Intel 8088-based Personal Computer (PC) in 1981, it was equipped with an add-in board (AIB) called the Color Graphics Adapter (CGA). The CGA AIB had 16 kilobytes of video memory and could drive either an NTSC TV monitor or a dedicated 4-bit RGB CRT monitor, such as the IBM 5153 color display. It didn’t have a dedicated controller and was assembled from a half dozen LSI chips. The large chip in the center is a CRT timing controller (CRTC), typically a Motorola MC6845.
 
Figure 1: IBM’s CGA Add-in board (hiteched)
 
Those AIBs were over 33 cm (13 inches) long and 10.7 cm (4.2 inches) tall. IBM introduced the second-generation Enhanced Graphics Adapter (EGA) in 1984, which superseded and exceeded the capabilities of the CGA. The EGA was in turn superseded by the VGA standard in 1987.
 
 
 
Figure 2: IBM’s EGA Add-in board — notice the similarity in form factor and layout to the CGA (Vlask)
 
But the EGA established a new industry. It wasn’t an integrated chip; however, its I/O was well documented, and it became one of the most copied (“cloned”) AIBs in history. A year after IBM introduced the EGA AIB, Chips and Technologies came out with a chip set that duplicated what the IBM AIB could do. Within a year the low-cost clones had captured over 40% of the market. Other chip companies such as ATI, NSI, Paradise, and Tseng Labs also produced EGA clone chips and fueled the explosion of clone-based boards. By 1986 there were over two dozen such suppliers, and the list was growing. Even the clones got cloned: Everex took a license from C&T so it could manufacture an EGA chip for its PCs.
 
 
Figure 3: With the advent of an integrated EGA controller, the AIBs started to get smaller (Old Computers)
 
The EGA controller wasn’t anything special, really. It offered 640×350 resolution with 16 colors (from a 6-bit palette of 64 colors) and a pixel aspect ratio of 1:1.37. It had the ability to adjust the frame buffer’s output aspect ratio by changing the resolution, giving it three additional, hard-wired display modes: 640×350 with 2 colors and an aspect ratio of 1:1.37, 640×200 with 16 colors and a 1:2.4 aspect ratio, and 320×200 with 16 colors and a 1:1.2 aspect ratio. Some EGA clones extended the EGA features to include 640×400, 640×480, and even 720×540, along with hardware detection of the attached monitor and a special 400-line interlace mode for use with older CGA monitors.
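Those aspect-ratio figures can be double-checked with a little arithmetic. A minimal sketch, assuming a standard 4:3 display:

```python
# On a 4:3 display, the shape of a pixel follows directly from the mode's
# resolution: pixel width is (4 / horizontal pixels) and pixel height is
# (3 / vertical pixels), so their ratio reduces to the expression below.

def pixel_aspect_ratio(width: int, height: int) -> float:
    """Width:height of one pixel on a 4:3 display (values < 1 mean the
    pixel is taller than it is wide)."""
    return (4 / 3) * (height / width)

# 640x350 gives ~0.73, i.e. roughly 1:1.37, matching the EGA figure above;
# 640x200 gives ~0.42 (1:2.4) and 320x200 gives ~0.83 (1:1.2).
```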
 
The big breakthrough for the EGA, and the reason it attracted so many copiers, was that its graphics modes were bit-mapped planar, as opposed to the previous-generation interlaced CGA and Hercules AIBs. The video memory was divided into four planes (except in 640×350×2 mode, which had two), one for each component of the IRGB color space.
 
Each bit represented one pixel. If a bit in the red plane was enabled, and none of the equivalent bits in the other planes were, a red pixel appeared in that location on screen. If all the other bits for that pixel were also enabled, it became white, and so forth.
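That planar lookup can be sketched in a few lines. This is an illustration of the IRGB bit packing described above, not actual EGA driver code:

```python
# Sketch of how a pixel's color emerges from the EGA's four bit planes.
# Each plane contributes one bit (intensity, red, green, blue); together
# they form a 4-bit color index in the standard IRGB ordering.

def pixel_color(i: int, r: int, g: int, b: int) -> int:
    """Combine one bit from each plane into a 4-bit IRGB color index."""
    return (i << 3) | (r << 2) | (g << 1) | b

# Only the red plane's bit set gives red (index 4); all four bits set
# gives bright white (index 15).
```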
 
The EGA moved us out of character-based graphics and into true bit-mapped graphics, based on a standard. Similar things had been accomplished with mods to microcomputers such as the Commodore PET and Radio Shack TRS-80, and directly from manufacturers such as IMSI and Color Graphics, but they did not use an integrated VLSI chip. The EGA was the last AIB to have a digital output; with the VGA came analog signaling and a larger color palette.
 
EGA begets VGA to XGA
With the introduction of the IBM PC, personal/micro and even workstation-class graphics got a new segment or category: consumer/commercial. Users in the commercial segment were not too concerned with high resolution, and certainly not with graphics performance. Certain users of spreadsheets liked higher resolution, and a special class of desktop publishing had a demand for very high resolution. But the volume market was commercial and consumer. Even that segment was subdivided: a certain class of consumers, gamers, did want high resolution and performance, but wouldn’t pay the prices professional graphics (i.e., workstation) users were being charged.
 
PGA
The NEC 7220 and Hitachi 63484 ACRTC discussed in previous Famous Graphics Chips articles went to the professional market. IBM, the industry leader and standard setter, recognized this, and in the same year it introduced the commercial/consumer-class EGA it also introduced a professional graphics AIB, the PGA. The PGA offered a high resolution of 640×480 pixels with 256 colors out of a palette of 4,096 colors. The refresh rate was 60 Hz. Like the EGA, the PGA was not an integrated chip.
 
8514
IBM discontinued the PGA in 1987, replacing it with the much higher resolution 8514, and breaking with the acronym description of AIBs. The 8514 could generate 1024×768 pixels at 256 colors and 43.5 Hz interlaced. The 8514 was a significant development, and IBM’s first integrated high-resolution VLSI graphics chip. The 8514 will be discussed in a future article and is mentioned here for chronological reference.
 
VGA
IBM’s Video Graphics Array was the most significant graphics chip ever produced in terms of volume and longevity. The VGA was introduced with the IBM PS/2 line of computers in 1987, along with the 8514. The two AIBs shared an output connector that became the industry standard for decades: the VGA connector. The VGA connector was, among other things, the catalyst that led to the formation of the Video Electronics Standards Association (VESA) in 1989. This too is a significant device and will be covered separately; it is listed here to show the complexity of the market at the time and how rapidly things were changing.
 
Summary
The EGA was really the foundation controller, and later the foundation chip, of the commercial and consumer PC graphics market.
 
 
 
Figure 4: History of VLSI graphics chips
 
By 1984 the computer market had consolidated into two main platforms: PCs and workstations. Microcomputers had died off in the early 1980s due to the introduction of the PC. Gaming (also called video) consoles stayed as living-room TV-based devices, and big machines called servers were replacing what had been mainframes. Supercomputers were still being produced at the rate of three or four a year. All of those machines used some type of graphics, and a few graphics terminals were still produced to serve the small but consistent high-end markets. However, by 1988 they all used standard graphics chips, sometimes several of them.
 
The EGA specification was the catalyst for the establishment of some companies and the increased success of others. One such company, AMD, is still with us, having acquired pioneer graphics company ATI (itself an EGA clone maker).
 
 
 
Dr. Jon Peddie is one of the pioneers of the graphics industry and formed Jon Peddie Research (JPR) to provide customer intimate consulting and market forecasting services where he explores the developments in computer graphics technology to advance economic inclusion and improve resource efficiency.
 
Recently named one of the most influential analysts, Peddie regularly advises investors in the technology sector. He is an advisor to the U.N. and to several companies in the computer graphics industry, as well as an advisor to the Siggraph Executive Committee, and in 2018 he was accepted as an ACM Distinguished Speaker. Peddie is a senior and lifetime member of IEEE, a former chair of the IEEE Super Computer Committee, and the former president of the Siggraph Pioneers. In 2015 he was given the Lifetime Achievement award from the CAAD society.
 
Peddie lectures at numerous conferences and universities worldwide on topics pertaining to graphics technology and the emerging trends in digital media technology. He has appeared on CNN, TechTV, and Future Talk TV, and is frequently quoted in trade and business publications.
 
Dr. Peddie has published hundreds of papers and has authored and contributed to no fewer than thirteen books in his career, the most recent being Augmented Reality, where we all will live. He is a contributor to TechWatch, for which he writes a series of weekly articles on AR, VR, AI, GPUs, and computer gaming, and is a regular contributor to IEEE, Computer Graphics World, and several other leading publications.
 
 

7 Tips for Faster 3D Rendering

3D rendering is a miracle of modern technology, capable of everything from creating lavish gaming experiences to simulating real-world environments for businesses. Unfortunately, your setup might suffer from lag, or delays that make it aggravating to render anything—but there are some simple changes that can improve your performance. 
 
Why Is 3D Rendering So Resource-Intensive?
 
Regardless of your application, 3D rendering is incredibly resource-intensive. This is partially because 3D rendering demands multiple components operating in unison, including your graphics cards, your RAM, your hard drive, and of course, the software you’re using. If even one of these components is off, your rendering speed could be negatively affected.
 
The problem is complicated by the fact that 3D rendering contains so much depth. Intuitively, you know that 3D rendering is more resource-intensive than 2D rendering because it multiplies your graphical needs by another factor. It also typically means you’re forced to render dense items, like textures, from scratch.
 
How to Decrease Rendering Times
 
So what steps can you take to decrease your rendering times?
  1. Upgrade your RAM. First, consider upgrading your RAM. Your PC’s random access memory is a fast type of memory that serves to temporarily store information your software needs in the moment and in the near future. Think of it as a holding cell or decompression chamber for your software’s information. If your RAM doesn’t have sufficient capacity, or if it has slowed over the years, you won’t be able to render things at high speed. Fortunately, swapping out your RAM is a relatively simple operation. 
  2. Invest in better software. It could also be that the program you’re running is the root of the problem. Not all 3D rendering or design software is created equal; some have natural inefficiencies that lead them to render things slowly. Experiment with different apps to see if they all have the same delays; if they do, it means the problem lies with your machine. 
  3. Tinker with your rendering settings. Your 3D rendering program likely comes with a ton of custom settings—most of which you’ve never touched. Sometimes, adjusting a few of these settings is all you need to speed up the rendering process. For example, you might reduce pre-comps, which require your pixel information to pass through many compositions before rendering to your hard drive. You can also trim certain layers, or use a different codec to make things smoother. 
  4. Buy a better graphics card. Most 3D rendering programs rely heavily on your graphics processing units (GPUs) to form the user interface. If yours are insufficient, or if they’re aging, it may be time for an upgrade. Just like with RAM, this is a relatively simple swap that even someone without technical experience should be able to handle. 
  5. Rely on a solid-state drive. There are two main types of hard drives: the previously standard hard disk drive (HDD) and the solid-state drive (SSD). SSDs are superior in most ways, allowing faster access times, offering greater reliability, and even using less power. The only downside is that they tend to be more expensive. If you want your machine to 3D render as quickly as possible, it’s definitely worth the upgrade. 
  6. Close any other programs. This is a simple step, but it’s an important one that many designers miss: close out any other programs you have open. Each program open on your computer is occupying at least some computer resources, which all take away from your 3D rendering potential. While you’re at it, consider giving your machine a full shutdown and restart. 
  7. Be selective with your effects. Part of the magic of 3D design and rendering is getting to add custom effects, which might add superior textures or make your design more realistic. However, if you’re trying to optimize for speed, these effects can hog your computer’s resources and bog things down. Be selective about which effects you apply, and consider the time cost as well as the visual advantages.
If you’re still having issues with your 3D rendering, it could mean there’s a problem with your software or files; ask other people running the same program on different machines if they’re experiencing the same issues. If not, it could mean your entire computer is due for an upgrade. It’s not a cheap solution, but it may be a necessary one if you want to reduce your 3D rendering times. 
 

Excessive Screen Time: Not a Problem Specific to Kids

Everywhere you look there are kids – from toddlers to teenagers – with noses buried in screens. Whether it’s a computer, tablet, smartphone, or TV, our youth are spending way too much time sitting in front of screens. But this isn’t a problem reserved for kids. 
 
The Universal Problem of Excessive Screen Time
 
Whether it’s a tablet, smartphone, or video game console, even today’s youngest children find themselves constantly connected – glued to screens and addicted to the release they provide.
 
According to a research study reported by the American Speech-Language-Hearing Association (ASHA) that examined nearly 900 children between the ages of six months and two years, children who use handheld screens before they begin to talk may be at a higher risk for speech delays.
 
“By their 18-month check-ups, 20 percent of the children had daily average handheld device use of 28 minutes, as reported by their parents,” ASHA explains. “Using a screening tool for language delay, researchers found that the more handheld screen time a child’s parent reported, the more likely the child was to have expressive speech delays.”
 
For every 30-minute increase in handheld screen time, research shows that children face a 49 percent increased risk of expressive speech delay. But this is far from the only impact. According to other independent studies, excessive exposure to technology also presents an increased risk of the following:
  • Obesity. More screen time means more sedentary activity. This leads to increased weight gain and a greater risk of becoming overweight or obese. 
  • Sleep problems. The blue light emitted from screens negatively impacts the brain’s sleep cycle and can lead to insomnia and other sleep-related problems.
  • Behavioral problems. It’s been shown that school-aged children who spend more than two hours per day watching TV or using a computer are more likely to have behavioral problems (including an increased propensity for bullying).
  • Educational problems. Elementary-aged children with televisions in their bedrooms are statistically more likely to perform worse on academic testing than their peers. 
  • Violence. Exposure to excessive media through movies, television shows, and video games desensitizes children to violence and makes them more likely to imitate the behaviors they see. 
But while we’d like to believe that excessive screen time is only a problem for our children, the reality is that it affects us all. American adults are spending more than 11 hours per day behind screens – watching videos, playing games, using social media, etc. – and the ill-effects are just as detrimental to our health and vitality. 
 
Excessive Screen Time in the Workforce
 
While some screen time is certainly necessary for adults to fulfill their professional obligations, it all depends on the kind of screen time people are using. Too much screen time away from work – such as using social media, playing video games, and binge-watching Netflix shows – can negatively impact members of the workforce when it comes time to use technology for productivity. 
 
All you have to do is look at a list of the top productivity-killers at work, and you’ll see that technology is largely to blame. Social networking, email, calls and text messages, and wasting time on the internet are all top factors. Then there are the physical health issues, such as headaches and vision problems. 
 
No matter how you slice it, excessive screen time is a problem for the entire family. So here are some ways to combat it:
 
1. Unplug and Connect With Family
 
“Find at least one or two opportunities during the day—at the dinner table, for example—for everyone to disconnect,” Loyola University Medical Center suggests. “Mealtime is a prime opportunity for conversation. Make a commitment and have everyone check their devices at the kitchen door.”    
 
This is an area where you should lead by example. You can’t continue to use your devices and expect your kids to take you seriously. This has to be a group effort.
 
2. Plan Screen Time Together
 
Nobody is saying all screen time is bad, but when you do have extended screen time, make it a family affair. By watching a movie together or having multiple people participate in an online game (in the same room), you maintain the social and personal interactions that so often get lost.
 
3. Find Other Activities 
 
Technology is often resorted to as a solution for boredom. If you and your family are constantly burying your noses in screens, it’s an indication that you don’t have much else going on. Try finding other activities and hobbies that interest your family. This will reduce the amount of time you have left to waste on technology.
 
Adding It All Up
 
Whether you’re an eight-year-old with an iPad or a 48-year-old with a smartphone, excessive screen time has a negative impact on everyone. If you want to instigate positive change in your family, start by addressing this rarely touched issue. Your family will be better for it.