Could you provide the full code? The problem is not clear: why do you need to join the arrays? You can check their shapes, e.g. using len(), or, if they are numpy arrays, via the .shape attribute.
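For instance, a quick sketch of both checks (the sample arrays are made up for illustration):

```python
import numpy as np

a = [1, 2, 3]                            # plain Python list: use len()
b = np.array([[1, 2], [3, 4], [5, 6]])   # numpy array: use .shape

print(len(a))    # 3
print(b.shape)   # (3, 2)
print(len(b))    # 3 -- len() on a 2-D array returns the number of rows
```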
Let's consider the common steps of validating an ML model, in general.
1) You have an original dataset X and class labels y. Suppose these arrays have shapes (n, m) and (n,) respectively, i.e. we have m features (# of columns) and n measurements (# of rows), with one class label per row. The classes could be encoded with integer values (some ML frameworks work only with numerical values).
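One simple way to get such an integer encoding, using only numpy (the string labels here are invented for illustration):

```python
import numpy as np

y = np.array(["cat", "dog", "cat", "bird"])

# np.unique returns the sorted distinct classes and, with
# return_inverse=True, the integer code of each original label
classes, y_encoded = np.unique(y, return_inverse=True)

print(classes)     # ['bird' 'cat' 'dog']
print(y_encoded)   # [1 2 1 0]
```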
2) We could train our classifier (or model) on X and y, apply the trained model back to X to get y_pred with the same shape as y, and compute some accuracy measure, such as precision, recall or accuracy: measure_score(y, y_pred) => some value. Unfortunately, doing so, we get an overestimated measure of accuracy. This is due to the overfitting problem.
3) A common way to overcome the overfitting problem consists in splitting the original dataset (X, y) into two datasets: (X_train, y_train) and (X_test, y_test). Usually this split is performed randomly, e.g. 85% of the rows of X (and the corresponding rows of y) are randomly selected for X_train and y_train, and the remaining 15% are used for X_test and y_test. The first pair (X_train, y_train) is used to train the model. The second pair, which was never shown to the model, is used for testing: we apply the model to X_test and compare the obtained y_pred with y_test; these vectors have the same size.
So, the pseudocode would be the following:
Quote:X, y -- original dataset
(X_train, y_train), (X_test, y_test) = split_data(X, y)
model -- ML model used to solve the classification problem
model.fit(X_train, y_train)  # fit the model on the train data
# From now on we have a fitted model, and we wish to estimate its accuracy
y_pred = model.predict(X_test)  # predict classes on the test data
some_accuracy_measure(y_test, y_pred) => float value (usually in [0, 1])
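The same steps, written as a runnable sketch with scikit-learn (the iris dataset and logistic regression are just placeholders; any dataset and classifier would do, and the 85/15 split ratio is the one mentioned above):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)       # X: (150, 4), y: (150,)

# Random 85/15 split of the original dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)             # fit on the train data only

y_pred = model.predict(X_test)          # predict on unseen test data
print(accuracy_score(y_test, y_pred))   # float value in [0, 1]
```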