Deciding on a language can be intimidating when you don’t have deep experience with the available options. This comparison explores the fundamental differences between C++ and Java, and what to consider when choosing between them.
AUTHOR
Timothy is an experienced software architect who has created multiple game engines with C++, including one used in more than 100 titles. His extensive background in Java ranges from Android game and application development to industry experience at Amazon, building an Android client for the AWS AppStream service. He has used more than 20 programming languages in his career, several of which he custom created to meet a specific need.
Countless articles compare C++ and Java’s technical features, but which differences are most important to consider? When a comparison shows, for example, that Java doesn’t support multiple inheritance and C++ does, what does that mean? And is it a good thing? Some argue that this is an advantage of Java, while others declare it a problem.
Let’s explore the situations in which developers should choose C++, Java, or another language altogether—and, even more importantly, why the decision matters.
C++ launched in 1985 as a front end to C compilers, similar to how TypeScript compiles to JavaScript. Modern C++ compilers typically compile to native machine code. Though some claim C++’s compilers reduce its portability, and they do necessitate rebuilds for new target architectures, C++ code runs on almost every processor platform.
First released in 1995, Java doesn’t build directly to native code. Instead, Java builds bytecode, an intermediate binary representation that runs on the Java Virtual Machine (JVM). In other words, the Java compiler’s output needs a platform-specific native executable, the JVM, to run.
Both C++ and Java fall into the family of C-like languages, as they generally resemble C in their syntax. The most significant difference is their ecosystems: While C++ can seamlessly call into libraries based on C or C++, or the API of an operating system, Java is best suited for Java-based libraries. You can access C libraries in Java using the Java Native Interface (JNI) API, but it is error-prone and requires some C or C++ code. C++ also interacts with hardware more easily than Java, as C++ is a lower-level language.
We can compare C++ to Java from many perspectives. In some cases, the decision between C++ and Java is clear. Native Android applications should typically use Java unless the app is a game. Most game developers should opt for C++ or another language for the smoothest possible real-time animation; Java’s memory management often causes lag during gameplay.
Cross-platform applications that aren’t games are beyond the scope of this discussion. Neither C++ nor Java are ideal in this case because they’re too verbose for efficient GUI development. For high-performance apps, it’s best to create C++ modules to do the heavy lifting, and use a more developer-productive language for the GUI.
In today’s technology landscape, most projects require the use of APIs. APIs bridge communication between services that may represent a single, complex system but may also reside on separate machines or use multiple, incompatible networks or languages.
Many standard technologies address the interservice communication needs of distributed systems, such as REST, SOAP, GraphQL, or gRPC. While REST is a favored approach, gRPC is a worthy contender, offering high performance, typed contracts, and excellent tooling.
Representational state transfer (REST) is a means of retrieving or manipulating a service’s data. A REST API is generally built on the HTTP protocol, using a URI to select a resource and an HTTP verb (e.g., GET, PUT, POST) to select the desired operation. Request and response bodies contain data that is specific to the operation, while their headers provide metadata. To illustrate, let’s look at a simplified example of retrieving a product via a REST API.
Here, we request a product resource with an ID of 11 and direct the API to respond in JSON format:
GET /products/11 HTTP/1.1
Accept: application/json
Given this request, our response (irrelevant headers omitted) may look like:
HTTP/1.1 200 OK
Content-Type: application/json
{
  "id": 11,
  "name": "Purple Bowtie",
  "sku": "purbow",
  "price": {
    "amount": 100,
    "currencyCode": "USD"
  }
}
While JSON may be human-readable, it is not optimal when used between services. The repetitive nature of referencing property names—even when compressed—can lead to bloated messages. Let’s look at an alternative to address this concern.
gRPC Remote Procedure Call (gRPC) is an open-source, contract-based, cross-platform communication protocol that simplifies and manages interservice communication by exposing a set of functions to external clients.
Built on top of HTTP/2, gRPC leverages features such as bidirectional streaming and built-in Transport Layer Security (TLS). gRPC enables more efficient communication through serialized binary payloads. It uses protocol buffers by default as its mechanism for structured data serialization, similar to REST’s use of JSON.
Unlike JSON, however, protocol buffers are more than a serialization format. They include three other major parts, starting with .proto files (we’ll follow proto3, the latest protocol buffer language specification).
The remote functions that a service makes available (defined in a .proto file) are listed inside the service node in the protocol buffer file. As developers, we get to define these functions and their parameters using protocol buffers’ rich type system. This system supports various numeric and date types, lists, dictionaries, and nullables to define our input and output messages.
These service definitions need to be available to both the server and the client. Unfortunately, there is no default mechanism for sharing these definitions aside from providing direct access to the .proto file itself.
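As a minimal TypeScript sketch of what calling such a service can look like from a client, the snippet below uses the @grpc/grpc-js and @grpc/proto-loader packages and assumes a hypothetical products.proto defining a shop.ProductService with a GetProduct function that mirrors the REST example above; the file, package, service, and message names are all stand-ins for whatever your own .proto defines.

// client.ts -- a sketch only; products.proto and its contents are hypothetical
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

// The assumed products.proto, shown here for reference:
//
//   syntax = "proto3";
//   package shop;
//   service ProductService {
//     rpc GetProduct (GetProductRequest) returns (Product);
//   }
//   message GetProductRequest { int32 id = 1; }
//   message Price { int32 amount = 1; string currency_code = 2; }
//   message Product { int32 id = 1; string name = 2; string sku = 3; Price price = 4; }

// Parse the .proto file at runtime and build a client constructor from it.
const definition = protoLoader.loadSync('products.proto', { keepCase: true });
const proto = grpc.loadPackageDefinition(definition) as any;

const client = new proto.shop.ProductService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

// The request and response travel as serialized protocol buffer messages over HTTP/2.
client.GetProduct({ id: 11 }, (err: Error | null, product: unknown) => {
  if (err) {
    console.error('GetProduct failed:', err);
    return;
  }
  console.log('Received product:', product);
});

Because the payloads are compact binary messages rather than JSON text, the same product response is noticeably smaller on the wire than in the REST example.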
Social network analysis is quickly becoming an important tool to serve a variety of professional needs. It can inform corporate goals such as targeted marketing and identify security or reputational risks. Social network analysis can also help businesses meet internal goals: It provides insight into employee behaviors and the relationships among different parts of a company.
Organizations can employ a number of software solutions for social network analysis; each has its pros and cons, and is suited for different purposes. This article focuses on Microsoft’s Power BI, one of the most commonly used data visualization tools today. While Power BI offers many social network add-ons, we’ll explore custom visuals in R to create more compelling and flexible results.
This tutorial assumes an understanding of basic graph theory, particularly directed graphs. Also, later steps are best suited for Power BI Desktop, which is only available on Windows. Readers on macOS or Linux may use Power BI in the browser, but the browser version does not support certain features, such as importing an Excel workbook.
Creating social networks starts with the collection of connection (edge) data. Connection data contains two primary fields: the source node and the target node, that is, the nodes at either end of the edge. Beyond these nodes, we can collect data to produce more comprehensive visual insights, typically represented as node or edge properties (a sketch of such records follows this list):
1) Node properties
2) Edge properties
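To make the shape of this data concrete, here is a minimal sketch of such records, written as TypeScript types purely for illustration. The field names, the optional properties, and the example values are hypothetical; in Power BI, the same data would simply be a table with one row per edge (and, optionally, one per node).

// Hypothetical connection (edge) and node records for a social network
interface EdgeRecord {
  source: string;        // source node, e.g., the employee initiating a contact
  target: string;        // target node, e.g., the employee being contacted
  weight?: number;       // optional edge property, e.g., number of interactions
}

interface NodeRecord {
  id: string;            // identifier matching the source/target values
  department?: string;   // optional node property used for grouping or coloring
}

// A tiny example edge list
const edges: EdgeRecord[] = [
  { source: 'alice', target: 'bob', weight: 12 },
  { source: 'bob', target: 'carol', weight: 3 },
];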
Let's not beat around the bush: React is a strong JavaScript library that lets you build scalable user interfaces. When it comes to internationalizing your app, though, it does not offer a built-in solution. Fortunately, there are some amazing open-source libraries that can help you manage your i18n project successfully from start to finish.
This curated list features the best libraries for React i18n. It will walk you through the pros and cons of each option in terms of flexibility, scalability, and most of all, developer productivity.
Since almost every language has different rules and conventions, and adapting to them within each library can be tricky, understanding the pros and cons of each option may take some time and effort.
At the time of writing, we are using the latest versions of React v16.11.0 and React Router v5.1.2. You can find all code examples in our GitHub repo.
If you are a serious Node.js software engineer working with Express, Koa, or a similar framework, you will need to be able to internationalize your app so it can support different locales. In this tutorial, you will learn how to set up i18n support in Node.js and organize your translations so your app can reach as many international users as possible.
Node.js is an asynchronous, event-driven JavaScript runtime designed to help build scalable network applications. In essence, it allows JavaScript to run in the backend as server-side code.
The Node ecosystem is vast and relies heavily on community projects. Although there are numerous tutorials online exploring Node.js and its libraries, the topic of Node internationalization is often left behind.
So, when you need scalable i18n solutions that are easy to use and implement, it pays to make some sensible software architecture decisions upfront.
This tutorial will try to fill that gap by showing ways of integrating i18n and adapting to different cultural rules and habits in your Node.js applications in a sensible manner.
For the purposes of this tutorial, I will be using the latest Node.js LTS runtime, v8.9.4, and the code for this tutorial is hosted on GitHub. For convenience, we are going to use the --experimental-modules flag in order to use ES6 imports in code. You can achieve the same result using Babel with preset-es2015.
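As a minimal sketch of that setup, the two hypothetical files below show an ES module exporting per-locale strings and another module importing it; the file names, the APP_LOCALE environment variable, and the translation keys are all assumptions for illustration.

// translations.mjs -- hypothetical module holding per-locale strings
export default {
  en: { greeting: 'Hello' },
  de: { greeting: 'Hallo' },
};

// index.mjs -- picks a locale and looks up a key, falling back to English
import translations from './translations.mjs';

const locale = process.env.APP_LOCALE || 'en';
const messages = translations[locale] || translations.en;
console.log(messages.greeting);

Running node --experimental-modules index.mjs then prints the greeting for the selected locale.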
Let’s get started!
One of the most popular open-source i18n libraries, ngx-translate, lets you define translations for your app and switch between them dynamically. You can either use a service, directive, or pipe to handle the translated content. In this Angular 13 tutorial, we will learn how to use them all with the help of a small demo app.
For demonstration purposes, we will create a sample feedback form for Phrase, the most reliable software localization platform on the market, and launch our demo app in two different languages—English and German.
You can access the demo app via Google Firebase to see how ngx-translate works with an Angular 13 app in a production environment. To get the source code for the demo app, head over to GitHub.
Note » Make sure you have an Angular dev environment set up on your machine. Should this not be the case, please refer to the Angular setup guide.
The Angular framework has a robust built-in i18n library. However, the ngx-translate library has some shiny advantages over the built-in one:
Navigate to the directory where you want to create the new project. Open the command prompt, and run the command shown below to create a new Angular app named ngx-translate-i18n.
ng new ngx-translate-i18n --routing=false --style=scss
Run the following command to install the @ngx-translate/core library in your app:
npm install @ngx-translate/core
We will need to install a loader that will help us load the translations from files using HttpClient. Run the command as follows:
npm install @ngx-translate/http-loader
We will add a separate module for ngx-translate. Run the following command to create a new module in your app.
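The exact command is not shown in this excerpt; a typical invocation, using translate-config as a placeholder module name, would be:

ng generate module translate-config

Inside that module, the usual ngx-translate wiring registers TranslateModule with a loader factory that uses TranslateHttpLoader to fetch JSON translation files via HttpClient. The following is a minimal sketch; the module, file, and path names are assumptions to adapt to your project.

// translate-config.module.ts -- minimal ngx-translate setup sketch
import { NgModule } from '@angular/core';
import { HttpClient, HttpClientModule } from '@angular/common/http';
import { TranslateLoader, TranslateModule } from '@ngx-translate/core';
import { TranslateHttpLoader } from '@ngx-translate/http-loader';

// Tell ngx-translate to load ./assets/i18n/<lang>.json files via HttpClient
export function httpLoaderFactory(http: HttpClient): TranslateLoader {
  return new TranslateHttpLoader(http, './assets/i18n/', '.json');
}

@NgModule({
  imports: [
    HttpClientModule,
    TranslateModule.forRoot({
      defaultLanguage: 'en',
      loader: {
        provide: TranslateLoader,
        useFactory: httpLoaderFactory,
        deps: [HttpClient],
      },
    }),
  ],
  exports: [TranslateModule],
})
export class TranslateConfigModule {}

Once this module is imported into AppModule, translated content can be rendered in a template with the translate pipe, for example {{ 'FEEDBACK.TITLE' | translate }}, where FEEDBACK.TITLE stands in for a key defined in assets/i18n/en.json.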
Globally, more than 2B people have a vision impairment. If you plan to launch an Android app for multiple markets, integrating accessibility features and services as early as possible into the Android internationalization process can help you increase your app’s reach even more. This tutorial will show you how to make Android accessibility an inherent part of your app and cater to visually impaired users.
Android accessibility services require devices with at least Android 6.0 Marshmallow. They include Talkback, Accessibility Menu, Select to Speak, Switch Access, and CallApp.
In this tutorial, we’ll focus on Talkback, the Google screen reader that gives eyes-free control of a mobile device.
Most original equipment manufacturers (OEMs) will have the service installed by default, but for some phones, you’ll have to download Talkback manually from the Play Store and activate it via the following path: Settings > Accessibility > Talkback.
Note » You can also enable and disable Talkback via adb by using the following commands:
// disable
adb shell settings put secure enabled_accessibility_services com.android.talkback/com.google.android.marvin.talkback.TalkBackService
// enable
adb shell settings put secure enabled_accessibility_services com.google.android.marvin.talkback/com.google.android.marvin.talkback.TalkBackService
With the help of intelligent gestures, Talkback supports visually impaired users in navigating and interacting with a mobile screen without actually looking at it. It talks back to the user, announcing all the relevant information the user might need from the app. In this article, we’ll look at several ways to make Talkback announce localized text so that we can provide global audiences with accessible apps.
Users browse a mobile app screen by swiping from left to right. Following each swipe, Talkback announces the items on the screen. When the user swipes from top to bottom, Talkback stays focused on that element and suggests actions related to it.
No-code development tools allow people to build software by dragging and dropping graphical objects. Credit: AppOnboard, Inc.
Traditional computer programming has a steep learning curve that requires learning a programming language, for example C/C++, Java or Python, just to build a simple application such as a calculator or Tic-tac-toe game. Programming also requires substantial debugging skills, which easily frustrates new learners. The study time, effort and experience needed often stop nonprogrammers from making software from scratch.
No-code is a way to program websites, mobile apps and games without using code, scripts or sets of commands. People readily learn from visual cues, which led to the development of "what you see is what you get" (WYSIWYG) document and multimedia editors as early as the 1970s. WYSIWYG editors allow you to work in a document as it appears in finished form. The concept was extended to software development in the 1990s.
There are many no-code development platforms that allow both programmers and nonprogrammers to create software through drag-and-drop graphical user interfaces instead of traditional line-by-line coding. For example, a user can drag a label and drop it to a website. The no-code platform will show how the label looks and create the corresponding HTML code. No-code development platforms generally offer templates or modules that allow anyone to build apps.
Early days
In the 1990s, websites were the most familiar interface to users. However, building a website required HTML coding and script-based programming that are not easy for a person lacking programming skills. This led to the release of early no-code platforms, including Microsoft FrontPage and Adobe Dreamweaver, to help nonprogrammers build websites.
Following the WYSIWYG mindset, nonprogrammers could drag and drop website components such as labels, text boxes and buttons without using HTML code. In addition to editing websites locally, these tools also helped users upload the built websites to remote web servers, a key step in putting a website online.
However, the websites created by these editors were basic static websites. There were no advanced functions such as user authentication or database connections.
Website development
There are many current no-code website-building platforms such as Bubble, Wix, WordPress and GoogleSites that overcome the shortcomings of the early no-code website builders. Bubble allows users to design the interface by defining a workflow. A workflow is a series of actions triggered by an event. For instance, when a user clicks on the save button (the event), the current game status is saved to a file (the series of actions).
Meanwhile, Wix launched an HTML5 site builder that includes a library of website templates. In addition, Wix supports modules—for example, data analysis of visitor data such as contact information, messages, purchases and bookings; booking support for hotels and vacation rentals; and a platform for independent musicians to market and sell their music.
WordPress was originally developed for personal blogs. It has since been extended to support forums, membership sites, learning management systems and online stores. Like WordPress, GoogleSites lets users create websites with various embedded functions from Google, such as YouTube, Google Maps, Google Drive, calendar and online office applications.
Game and mobile apps
In addition to website builders, there are no-code platforms for game and mobile app development. The platforms are aimed at designers, entrepreneurs and hobbyists who don't have game development or coding knowledge.
GameMaker provides a user interface with built-in editors for raster graphics, game level design, scripting, paths and "shaders" for representing light and shadow. GameMaker is primarily intended for making games with 2D graphics and 2D skeletal animations.
Buildbox is a no-code 3D game development platform. The main features of Buildbox include the image drop wheel, asset bar, option bar, collision editor, scene editor, physics simulation and even monetization options. While using Buildbox, users also get access to a library of game assets, sound effects and animations. In addition, Buildbox users can create the story of the game. Then users can edit game characters and environmental settings such as weather conditions and time of day, and change the user interface. They can also animate objects, insert video ads, and export their games to different platforms such as PCs and mobile devices.
Games such as Minecraft and SimCity can be thought of as tools for creating virtual worlds without coding.
Future of no-code
No-code platforms help increase the number of developers, in a time of increasing demand for software development. No-code is showing up in fields such as e-commerce, education and health care.
I expect that no-code will play a more prominent role in artificial intelligence, as well. Training machine-learning models, the heart of AI, requires time, effort and experience. No-code programming can help reduce the time to train these models, which makes it easier to use AI for many purposes. For example, one no-code AI tool allows nonprogrammers to create chatbots, something that would have been unimaginable even a few years ago.
Object tracking—following objects over time—is an essential image analysis technique used to quantify dynamic processes in biosciences. A new application called TrackMate v7 enables scientists to track objects in images easily. TrackMate is a free, open-source tool available as part of the Fiji image analysis platform.
"TrackMate allows scientists to tackle complex tracking problems more efficiently, accelerating discoveries in life sciences across fields," says Guillaume Jacquemet, Academy Research Fellow at Åbo Akademi University and one of the researchers involved in TrackMate development.
In life sciences, tracking is used, for instance, to follow the movement of molecules, subcellular organelles, bacteria, cells, and whole animals. However, due to the sheer diversity of images used in research, no single application can address every tracking challenge.
Bacteria growth (Neisseria meningitidis) was followed over time using TrackMate v7. A track and lineage of a single bacterium are highlighted in green, and changes in bacteria shape (area and circularity) over the tracking period were plotted. Bacteria division can be observed through the dramatic changes in the area and circularity. Credit: Nature Methods (2022). DOI: 10.1038/s41592-022-01507-1
TrackMate v7 offers automated and semi-automated tracking algorithms and advanced visualization and analysis tools. To analyze a wide variety of images, the application relies on artificial intelligence solutions and other advanced segmentation algorithms to detect objects from images.
"This new feature widely increases the breadth of TrackMate applications and capabilities. For instance, we show that TrackMate v7 can be used to follow moving cancer cells, immune cells, or stem cells. It can also be used to follow bacteria growth," says Jacquemet.
"We are currently using the software in the Jacquemet laboratory to study the mechanisms enabling cancer metastasis. We can now produce better and more informative data much faster than ever before," he adds.
The development of TrackMate v7 was coordinated by the Jacquemet (Åbo Akademi University, Turku, Finland) and Tinevez laboratories (Pasteur Institute, Paris, France).
Their research is published in Nature Methods.
Brain responses being used as supervision signals for semantic image editing. Credit: Tuukka Ruotsalo et al
Soon, computers could sense that users have a problem and come to the rescue. This is one of the possible implications of new research at the University of Copenhagen and the University of Helsinki.
"We can make a computer edit images entirely based on thoughts generated by human subjects. The computer has absolutely no prior information about which features it is supposed to edit or how. Nobody has ever done this before," says Associate Professor Tuukka Ruotsalo, Department of Computer Science, University of Copenhagen.
The results are presented in a paper accepted for publication at CVPR 2022 (the Conference on Computer Vision and Pattern Recognition).
Brain activity as the sole input
In the underlying study, 30 participants were equipped with hoods containing EEG electrodes that map electrical brain signals. All participants were given the same 200 facial images to look at. Also, they were given a series of tasks such as looking for female faces, looking for older people, looking for blond hair, etc.
The participants did not perform any actions and looked briefly at the images—0.5 second for each image. Based on their brain activity, the machine first mapped the given preference and then edited the images accordingly. So if the task was to look for older people, the computer would modify the portraits of the younger persons, making them look older. And if the task was to look for a given hair color, all images would get that color.
"Notably, the computer had no knowledge of face recognition and would have no idea about gender, hair color, or any other relevant features. Still, it only edited the feature in question, leaving other facial features unchanged," comments Ph.D. Student Keith Davis, University of Helsinki.
Some may argue that plenty of software capable of manipulating facial features already exists. That would be missing the point, Keith Davis explains:
"All the existing software has been previously trained with labeled input. So, if you want an app which can make people look older, you feed it thousands of portraits and tell the computer which ones are young, and which are old. Here, the brain activity of the subjects was the only input. This is an entirely new paradigm in artificial intelligence—using the human brain directly as the source of input."
Possible applications in medicine
One possible application could be in medicine: "Doctors already use artificial intelligence in interpretation of scanning images. However, mistakes do happen. After all, the doctors are only assisted by the images but will take the decisions themselves. Maybe certain features in the images are more often misinterpreted than others. Such patterns might be discovered through an application of our research," says Tuukka Ruotsalo.
Another application could be assistance to certain groups of disabled people, for instance allowing a paralyzed person to operate his or her computer.
"That would be fantastic," says Tuukka Ruotsalo. "However, that is not the focus of our research. We have a broad scope, looking to improve machine learning in general. The range of possible applications will be wide. For instance, 10 or 20 years from now we may not need to use a mouse or type commands to operate our computer. Maybe we can just use mind control."
Calls for policy regulation
However, the coin does have a flip side, according to Tuukka Ruotsalo: "Collecting individual brain signals does involve ethical issues. Whoever acquires this knowledge could potentially obtain deep insight into a person's preferences. We already see some trends. People buy 'smart' watches and similar devices able to record heart rate, etc., but are we sure that data are not generated which give private corporations knowledge which we wouldn't want to share?"
"I see this as an important aspect of academic work. Our research shows what is possible, but we shouldn't do things just because they can be done. This is an area which in my view needs to be regulated by guidelines and public policies. If these are not adapted, private companies will just go ahead."